00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 596 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3262 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.027 The recommended git tool is: git 00:00:00.027 using credential 00000000-0000-0000-0000-000000000002 00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.044 Fetching changes from the remote Git repository 00:00:00.046 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.065 Using shallow fetch with depth 1 00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.065 > git --version # timeout=10 00:00:00.093 > git --version # 'git version 2.39.2' 00:00:00.093 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.125 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.125 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.989 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.002 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.015 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.015 > git config core.sparsecheckout # timeout=10 00:00:03.026 > git read-tree -mu HEAD # timeout=10 00:00:03.043 > git checkout -f 
9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.064 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.064 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.165 [Pipeline] Start of Pipeline 00:00:03.177 [Pipeline] library 00:00:03.178 Loading library shm_lib@master 00:00:03.179 Library shm_lib@master is cached. Copying from home. 00:00:03.197 [Pipeline] node 00:00:03.203 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.204 [Pipeline] { 00:00:03.214 [Pipeline] catchError 00:00:03.216 [Pipeline] { 00:00:03.225 [Pipeline] wrap 00:00:03.232 [Pipeline] { 00:00:03.238 [Pipeline] stage 00:00:03.239 [Pipeline] { (Prologue) 00:00:03.406 [Pipeline] sh 00:00:03.683 + logger -p user.info -t JENKINS-CI 00:00:03.701 [Pipeline] echo 00:00:03.703 Node: WFP16 00:00:03.712 [Pipeline] sh 00:00:04.007 [Pipeline] setCustomBuildProperty 00:00:04.017 [Pipeline] echo 00:00:04.019 Cleanup processes 00:00:04.024 [Pipeline] sh 00:00:04.300 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.300 3805426 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.312 [Pipeline] sh 00:00:04.590 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.590 ++ grep -v 'sudo pgrep' 00:00:04.590 ++ awk '{print $1}' 00:00:04.590 + sudo kill -9 00:00:04.590 + true 00:00:04.606 [Pipeline] cleanWs 00:00:04.618 [WS-CLEANUP] Deleting project workspace... 00:00:04.618 [WS-CLEANUP] Deferred wipeout is used... 
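[Editor's note] The "Cleanup processes" step above lists stale test processes with `pgrep -af`, drops the `pgrep` invocation itself, extracts the PID column with `awk`, and `kill -9`s the result, with `true` swallowing the failure when nothing matched (as happened in this run). A minimal standalone sketch of that pipeline, using a hypothetical workspace path in place of the Jenkins one:

```shell
#!/usr/bin/env bash
# Sketch of the prologue's stale-process cleanup. The path is a hypothetical
# stand-in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
workspace="/tmp/nonexistent-spdk-workspace"

# pgrep -af prints "PID full-command-line" for every match; drop any pgrep
# line (the real job's sudo wrapper matches itself), keep only the PIDs.
pids=$(pgrep -af "$workspace" | grep -v pgrep | awk '{print $1}')

# Kill whatever remains; mirror the log's "+ true" so an empty match list
# does not fail the step.
if [ -n "$pids" ]; then
    kill -9 $pids 2>/dev/null || true
fi
echo "cleanup: ${pids:-nothing to kill}"
```

In this sketch the path matches no running process, so `pids` stays empty and the kill is skipped, which is exactly the `+ sudo kill -9` / `+ true` sequence seen in the log.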
00:00:04.623 [WS-CLEANUP] done 00:00:04.627 [Pipeline] setCustomBuildProperty 00:00:04.639 [Pipeline] sh 00:00:04.916 + sudo git config --global --replace-all safe.directory '*' 00:00:05.025 [Pipeline] httpRequest 00:00:05.042 [Pipeline] echo 00:00:05.043 Sorcerer 10.211.164.101 is alive 00:00:05.052 [Pipeline] httpRequest 00:00:05.057 HttpMethod: GET 00:00:05.057 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.058 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.059 Response Code: HTTP/1.1 200 OK 00:00:05.059 Success: Status code 200 is in the accepted range: 200,404 00:00:05.059 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.594 [Pipeline] sh 00:00:05.877 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.892 [Pipeline] httpRequest 00:00:05.906 [Pipeline] echo 00:00:05.907 Sorcerer 10.211.164.101 is alive 00:00:05.915 [Pipeline] httpRequest 00:00:05.919 HttpMethod: GET 00:00:05.920 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:05.920 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:05.922 Response Code: HTTP/1.1 200 OK 00:00:05.923 Success: Status code 200 is in the accepted range: 200,404 00:00:05.923 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:17.066 [Pipeline] sh 00:00:17.350 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:21.552 [Pipeline] sh 00:00:21.835 + git -C spdk log --oneline -n5 00:00:21.835 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:21.835 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:21.836 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:21.836 
e03c164a1 nvme: add nvme_ctrlr_lock 00:00:21.836 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:21.856 [Pipeline] withCredentials 00:00:21.868 > git --version # timeout=10 00:00:21.884 > git --version # 'git version 2.39.2' 00:00:21.901 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:21.903 [Pipeline] { 00:00:21.913 [Pipeline] retry 00:00:21.915 [Pipeline] { 00:00:21.932 [Pipeline] sh 00:00:22.218 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:27.503 [Pipeline] } 00:00:27.526 [Pipeline] // retry 00:00:27.532 [Pipeline] } 00:00:27.553 [Pipeline] // withCredentials 00:00:27.564 [Pipeline] httpRequest 00:00:27.583 [Pipeline] echo 00:00:27.585 Sorcerer 10.211.164.101 is alive 00:00:27.595 [Pipeline] httpRequest 00:00:27.600 HttpMethod: GET 00:00:27.600 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.601 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.603 Response Code: HTTP/1.1 200 OK 00:00:27.604 Success: Status code 200 is in the accepted range: 200,404 00:00:27.604 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.282 [Pipeline] sh 00:00:32.563 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:35.102 [Pipeline] sh 00:00:35.383 + git -C dpdk log --oneline -n5 00:00:35.384 eeb0605f11 version: 23.11.0 00:00:35.384 238778122a doc: update release notes for 23.11 00:00:35.384 46aa6b3cfc doc: fix description of RSS features 00:00:35.384 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:35.384 7e421ae345 devtools: support skipping forbid rule check 00:00:35.394 [Pipeline] } 00:00:35.412 [Pipeline] // stage 00:00:35.424 [Pipeline] stage 00:00:35.426 [Pipeline] { (Prepare) 00:00:35.452 [Pipeline] writeFile 00:00:35.473 [Pipeline] sh 00:00:35.754 + logger -p user.info 
-t JENKINS-CI 00:00:35.767 [Pipeline] sh 00:00:36.050 + logger -p user.info -t JENKINS-CI 00:00:36.063 [Pipeline] sh 00:00:36.344 + cat autorun-spdk.conf 00:00:36.344 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.344 SPDK_TEST_NVMF=1 00:00:36.344 SPDK_TEST_NVME_CLI=1 00:00:36.344 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.344 SPDK_TEST_NVMF_NICS=e810 00:00:36.344 SPDK_TEST_VFIOUSER=1 00:00:36.344 SPDK_RUN_UBSAN=1 00:00:36.344 NET_TYPE=phy 00:00:36.344 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:36.344 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:36.351 RUN_NIGHTLY=1 00:00:36.357 [Pipeline] readFile 00:00:36.384 [Pipeline] withEnv 00:00:36.386 [Pipeline] { 00:00:36.399 [Pipeline] sh 00:00:36.721 + set -ex 00:00:36.721 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:36.721 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:36.721 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.721 ++ SPDK_TEST_NVMF=1 00:00:36.721 ++ SPDK_TEST_NVME_CLI=1 00:00:36.721 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.721 ++ SPDK_TEST_NVMF_NICS=e810 00:00:36.721 ++ SPDK_TEST_VFIOUSER=1 00:00:36.721 ++ SPDK_RUN_UBSAN=1 00:00:36.721 ++ NET_TYPE=phy 00:00:36.721 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:36.721 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:36.721 ++ RUN_NIGHTLY=1 00:00:36.721 + case $SPDK_TEST_NVMF_NICS in 00:00:36.721 + DRIVERS=ice 00:00:36.721 + [[ tcp == \r\d\m\a ]] 00:00:36.721 + [[ -n ice ]] 00:00:36.721 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:36.721 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:36.721 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:36.721 rmmod: ERROR: Module irdma is not currently loaded 00:00:36.721 rmmod: ERROR: Module i40iw is not currently loaded 00:00:36.721 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:36.721 + true 00:00:36.721 + for D in $DRIVERS 00:00:36.721 + sudo modprobe ice 00:00:36.721 + 
exit 0 00:00:36.730 [Pipeline] } 00:00:36.749 [Pipeline] // withEnv 00:00:36.755 [Pipeline] } 00:00:36.772 [Pipeline] // stage 00:00:36.782 [Pipeline] catchError 00:00:36.784 [Pipeline] { 00:00:36.802 [Pipeline] timeout 00:00:36.803 Timeout set to expire in 50 min 00:00:36.804 [Pipeline] { 00:00:36.820 [Pipeline] stage 00:00:36.822 [Pipeline] { (Tests) 00:00:36.839 [Pipeline] sh 00:00:37.120 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:37.120 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:37.120 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:37.120 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:37.120 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:37.120 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:37.120 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:37.120 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:37.120 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:37.120 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:37.120 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:37.120 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:37.120 + source /etc/os-release 00:00:37.120 ++ NAME='Fedora Linux' 00:00:37.120 ++ VERSION='38 (Cloud Edition)' 00:00:37.120 ++ ID=fedora 00:00:37.120 ++ VERSION_ID=38 00:00:37.120 ++ VERSION_CODENAME= 00:00:37.120 ++ PLATFORM_ID=platform:f38 00:00:37.120 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:37.120 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:37.120 ++ LOGO=fedora-logo-icon 00:00:37.120 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:37.120 ++ HOME_URL=https://fedoraproject.org/ 00:00:37.120 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:37.120 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:37.120 ++ 
BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:37.120 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:37.120 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:37.120 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:37.120 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:37.120 ++ SUPPORT_END=2024-05-14
00:00:37.120 ++ VARIANT='Cloud Edition'
00:00:37.120 ++ VARIANT_ID=cloud
00:00:37.120 + uname -a
00:00:37.120 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:37.120 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:39.657 Hugepages
00:00:39.657 node hugesize free / total
00:00:39.657 node0 1048576kB 0 / 0
00:00:39.657 node0 2048kB 0 / 0
00:00:39.657 node1 1048576kB 0 / 0
00:00:39.657 node1 2048kB 0 / 0
00:00:39.657
00:00:39.657 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:39.657 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:39.657 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:39.657 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:39.657 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:39.657 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:39.658 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:39.658 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:39.658 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:39.658 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:39.658 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:39.658 + rm -f /tmp/spdk-ld-path
00:00:39.658 + source autorun-spdk.conf
00:00:39.658 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.658 ++ SPDK_TEST_NVMF=1
00:00:39.658 ++
SPDK_TEST_NVME_CLI=1 00:00:39.658 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.658 ++ SPDK_TEST_NVMF_NICS=e810 00:00:39.658 ++ SPDK_TEST_VFIOUSER=1 00:00:39.658 ++ SPDK_RUN_UBSAN=1 00:00:39.658 ++ NET_TYPE=phy 00:00:39.658 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:39.658 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.658 ++ RUN_NIGHTLY=1 00:00:39.658 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:39.658 + [[ -n '' ]] 00:00:39.658 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.658 + for M in /var/spdk/build-*-manifest.txt 00:00:39.658 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:39.658 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:39.658 + for M in /var/spdk/build-*-manifest.txt 00:00:39.658 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:39.658 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:39.658 ++ uname 00:00:39.658 + [[ Linux == \L\i\n\u\x ]] 00:00:39.658 + sudo dmesg -T 00:00:39.658 + sudo dmesg --clear 00:00:39.658 + dmesg_pid=3806370 00:00:39.658 + [[ Fedora Linux == FreeBSD ]] 00:00:39.658 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.658 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.658 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:39.658 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.658 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.658 + [[ -x /usr/src/fio-static/fio ]] 00:00:39.658 + export FIO_BIN=/usr/src/fio-static/fio 00:00:39.658 + FIO_BIN=/usr/src/fio-static/fio 00:00:39.658 + sudo dmesg -Tw 00:00:39.658 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:39.658 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:39.658 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:39.658 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.658 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.658 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:39.658 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.658 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.658 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:39.658 Test configuration: 00:00:39.658 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.658 SPDK_TEST_NVMF=1 00:00:39.658 SPDK_TEST_NVME_CLI=1 00:00:39.658 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.658 SPDK_TEST_NVMF_NICS=e810 00:00:39.658 SPDK_TEST_VFIOUSER=1 00:00:39.658 SPDK_RUN_UBSAN=1 00:00:39.658 NET_TYPE=phy 00:00:39.658 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:39.658 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.658 RUN_NIGHTLY=1 17:11:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:39.658 17:11:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:39.917 17:11:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:39.917 17:11:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:39.917 17:11:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.917 17:11:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.917 17:11:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.917 17:11:18 -- paths/export.sh@5 -- $ export PATH 00:00:39.917 17:11:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.917 17:11:18 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:39.917 17:11:18 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:39.917 17:11:18 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720797078.XXXXXX 00:00:39.917 17:11:18 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720797078.KCCBBd 00:00:39.917 17:11:18 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:39.917 17:11:18 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:00:39.917 17:11:18 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.917 17:11:18 -- 
common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:39.917 17:11:18 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:39.917 17:11:18 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:39.917 17:11:18 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:39.917 17:11:18 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:00:39.917 17:11:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.917 17:11:18 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:39.917 17:11:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:39.917 17:11:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:39.917 17:11:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.917 17:11:18 -- spdk/autobuild.sh@16 -- $ date -u 00:00:39.917 Fri Jul 12 03:11:18 PM UTC 2024 00:00:39.917 17:11:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:39.917 LTS-59-g4b94202c6 00:00:39.917 17:11:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:39.917 17:11:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:39.917 17:11:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:39.917 17:11:18 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:00:39.917 17:11:18 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:39.917 17:11:18 -- 
common/autotest_common.sh@10 -- $ set +x 00:00:39.917 ************************************ 00:00:39.917 START TEST ubsan 00:00:39.917 ************************************ 00:00:39.917 17:11:18 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:00:39.917 using ubsan 00:00:39.917 00:00:39.917 real 0m0.000s 00:00:39.917 user 0m0.000s 00:00:39.917 sys 0m0.000s 00:00:39.917 17:11:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:39.917 17:11:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.917 ************************************ 00:00:39.917 END TEST ubsan 00:00:39.917 ************************************ 00:00:39.917 17:11:18 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:00:39.917 17:11:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:39.918 17:11:18 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:39.918 17:11:18 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:00:39.918 17:11:18 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:39.918 17:11:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.918 ************************************ 00:00:39.918 START TEST build_native_dpdk 00:00:39.918 ************************************ 00:00:39.918 17:11:18 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:00:39.918 17:11:18 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:39.918 17:11:18 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:39.918 17:11:18 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:39.918 17:11:18 -- common/autobuild_common.sh@51 -- $ local compiler 00:00:39.918 17:11:18 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:39.918 17:11:18 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:39.918 17:11:18 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:39.918 17:11:18 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:39.918 17:11:18 -- 
common/autobuild_common.sh@61 -- $ CC=gcc 00:00:39.918 17:11:18 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:39.918 17:11:18 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:39.918 17:11:18 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:39.918 17:11:18 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.918 17:11:18 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.918 17:11:18 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:39.918 17:11:18 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.918 17:11:18 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:39.918 eeb0605f11 version: 23.11.0 00:00:39.918 238778122a doc: update release notes for 23.11 00:00:39.918 46aa6b3cfc doc: fix description of RSS features 00:00:39.918 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:39.918 7e421ae345 devtools: support skipping forbid rule check 00:00:39.918 17:11:18 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:39.918 17:11:18 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:39.918 17:11:18 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:00:39.918 17:11:18 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:39.918 
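[Editor's note] The trace above probes the compiler (`gcc -dumpversion` → 13) and then appends DPDK CFLAGS behind version gates: `-Werror` for gcc ≥ 5 and, continuing below, `-Wno-stringop-overflow` for gcc ≥ 10. A deterministic sketch of that gating, with the version pinned to the value observed in this run rather than re-probing the compiler:

```shell
#!/usr/bin/env bash
# Sketch of the compiler-version flag gating in the trace. The real script
# derives the major version via "$CC -dumpversion"; it is pinned to 13 here
# (the value seen in this run) to keep the sketch self-contained.
compiler_version=13
dpdk_cflags='-fPIC -g -fcommon'

# Mirror the [[ 13 -ge 5 ]] and [[ 13 -ge 10 ]] checks from the log.
(( compiler_version >= 5 ))  && dpdk_cflags+=' -Werror'
(( compiler_version >= 10 )) && dpdk_cflags+=' -Wno-stringop-overflow'

echo "$dpdk_cflags"
```

With version 13 both gates fire, reproducing the `-fPIC -g -fcommon -Werror -Wno-stringop-overflow` flag string that later appears in the meson `-Dc_args=` invocation.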
17:11:18 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:39.918 17:11:18 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:39.918 17:11:18 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:39.918 17:11:18 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:39.918 17:11:18 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:39.918 17:11:18 -- common/autobuild_common.sh@168 -- $ uname -s 00:00:39.918 17:11:18 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:39.918 17:11:18 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:00:39.918 17:11:18 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:00:39.918 17:11:18 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:00:39.918 17:11:18 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:00:39.918 17:11:18 -- scripts/common.sh@335 -- $ IFS=.-: 00:00:39.918 17:11:18 -- scripts/common.sh@335 -- $ read -ra ver1 00:00:39.918 17:11:18 -- scripts/common.sh@336 -- $ IFS=.-: 00:00:39.918 17:11:18 -- scripts/common.sh@336 -- $ read -ra ver2 00:00:39.918 17:11:18 -- scripts/common.sh@337 -- $ local 'op=<' 00:00:39.918 17:11:18 -- scripts/common.sh@339 -- $ ver1_l=3 00:00:39.918 17:11:18 -- scripts/common.sh@340 -- $ ver2_l=3 00:00:39.918 17:11:18 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:00:39.918 17:11:18 -- scripts/common.sh@343 -- $ case "$op" in 00:00:39.918 17:11:18 -- scripts/common.sh@344 -- $ : 1 00:00:39.918 17:11:18 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:00:39.918 17:11:18 -- 
scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:39.918 17:11:18 -- scripts/common.sh@364 -- $ decimal 23 00:00:39.918 17:11:18 -- scripts/common.sh@352 -- $ local d=23 00:00:39.918 17:11:18 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:00:39.918 17:11:18 -- scripts/common.sh@354 -- $ echo 23 00:00:39.918 17:11:18 -- scripts/common.sh@364 -- $ ver1[v]=23 00:00:39.918 17:11:18 -- scripts/common.sh@365 -- $ decimal 21 00:00:39.918 17:11:18 -- scripts/common.sh@352 -- $ local d=21 00:00:39.918 17:11:18 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:39.918 17:11:18 -- scripts/common.sh@354 -- $ echo 21 00:00:39.918 17:11:18 -- scripts/common.sh@365 -- $ ver2[v]=21 00:00:39.918 17:11:18 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:00:39.918 17:11:18 -- scripts/common.sh@366 -- $ return 1 00:00:39.918 17:11:18 -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:39.918 patching file config/rte_config.h 00:00:39.918 Hunk #1 succeeded at 60 (offset 1 line). 
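[Editor's note] The `cmp_versions 23.11.0 '<' 21.11.0` trace above splits both versions on `.`/`-`/`:` into arrays and compares them field by field; 23 > 21 at the first field, so it returns 1 ("not less than") and the v23.11 patch path runs (`patching file config/rte_config.h`). A compact re-sketch of that comparison, using a hypothetical helper name `version_lt`:

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field version comparison traced in scripts/common.sh.
# Returns 0 when $1 is strictly older than $2, 1 otherwise.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]}; i++)); do
        (( a[i] < ${b[i]:-0} )) && return 0   # earlier field decides: older
        (( a[i] > ${b[i]:-0} )) && return 1   # earlier field decides: newer
    done
    return 1  # equal versions are not less-than
}

# 23.11.0 is not older than 21.11.0, matching the "return 1" in the trace.
version_lt 23.11.0 21.11.0 && echo "older" || echo "not older"
```

The real `cmp_versions` also handles `>`/`=`/`ge`-style operators and mixed-length versions; this sketch keeps only the `<` path exercised in this log.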
00:00:39.918 17:11:18 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:00:39.918 17:11:18 -- common/autobuild_common.sh@178 -- $ uname -s 00:00:39.918 17:11:18 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:00:39.918 17:11:18 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:39.918 17:11:18 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:45.191 The Meson build system 00:00:45.191 Version: 1.3.1 00:00:45.191 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:45.191 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:45.191 Build type: native build 00:00:45.191 Program cat found: YES (/usr/bin/cat) 00:00:45.191 Project name: DPDK 00:00:45.191 Project version: 23.11.0 00:00:45.191 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:45.191 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:45.191 Host machine cpu family: x86_64 00:00:45.191 Host machine cpu: x86_64 00:00:45.191 Message: ## Building in Developer Mode ## 00:00:45.191 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:45.191 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:45.191 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:45.191 Program python3 found: YES (/usr/bin/python3) 00:00:45.191 Program cat found: YES (/usr/bin/cat) 00:00:45.191 config/meson.build:113: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:00:45.191 Compiler for C supports arguments -march=native: YES 00:00:45.191 Checking for size of "void *" : 8 00:00:45.191 Checking for size of "void *" : 8 (cached) 00:00:45.191 Library m found: YES 00:00:45.191 Library numa found: YES 00:00:45.191 Has header "numaif.h" : YES 00:00:45.191 Library fdt found: NO 00:00:45.191 Library execinfo found: NO 00:00:45.191 Has header "execinfo.h" : YES 00:00:45.191 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:45.191 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:45.191 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:45.191 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:45.191 Run-time dependency openssl found: YES 3.0.9 00:00:45.191 Run-time dependency libpcap found: YES 1.10.4 00:00:45.191 Has header "pcap.h" with dependency libpcap: YES 00:00:45.192 Compiler for C supports arguments -Wcast-qual: YES 00:00:45.192 Compiler for C supports arguments -Wdeprecated: YES 00:00:45.192 Compiler for C supports arguments -Wformat: YES 00:00:45.192 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:45.192 Compiler for C supports arguments -Wformat-security: NO 00:00:45.192 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:45.192 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:45.192 Compiler for C supports arguments -Wnested-externs: YES 00:00:45.192 Compiler for C supports arguments -Wold-style-definition: YES 00:00:45.192 Compiler for C supports arguments -Wpointer-arith: YES 00:00:45.192 Compiler for C supports arguments -Wsign-compare: YES 00:00:45.192 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:45.192 Compiler for C supports arguments -Wundef: YES 00:00:45.192 Compiler for C supports arguments -Wwrite-strings: YES 00:00:45.192 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:45.192 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:00:45.192 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:45.192 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:45.192 Program objdump found: YES (/usr/bin/objdump) 00:00:45.192 Compiler for C supports arguments -mavx512f: YES 00:00:45.192 Checking if "AVX512 checking" compiles: YES 00:00:45.192 Fetching value of define "__SSE4_2__" : 1 00:00:45.192 Fetching value of define "__AES__" : 1 00:00:45.192 Fetching value of define "__AVX__" : 1 00:00:45.192 Fetching value of define "__AVX2__" : 1 00:00:45.192 Fetching value of define "__AVX512BW__" : 1 00:00:45.192 Fetching value of define "__AVX512CD__" : 1 00:00:45.192 Fetching value of define "__AVX512DQ__" : 1 00:00:45.192 Fetching value of define "__AVX512F__" : 1 00:00:45.192 Fetching value of define "__AVX512VL__" : 1 00:00:45.192 Fetching value of define "__PCLMUL__" : 1 00:00:45.192 Fetching value of define "__RDRND__" : 1 00:00:45.192 Fetching value of define "__RDSEED__" : 1 00:00:45.192 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:00:45.192 Fetching value of define "__znver1__" : (undefined) 00:00:45.192 Fetching value of define "__znver2__" : (undefined) 00:00:45.192 Fetching value of define "__znver3__" : (undefined) 00:00:45.192 Fetching value of define "__znver4__" : (undefined) 00:00:45.192 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:45.192 Message: lib/log: Defining dependency "log" 00:00:45.192 Message: lib/kvargs: Defining dependency "kvargs" 00:00:45.192 Message: lib/telemetry: Defining dependency "telemetry" 00:00:45.192 Checking for function "getentropy" : NO 00:00:45.192 Message: lib/eal: Defining dependency "eal" 00:00:45.192 Message: lib/ring: Defining dependency "ring" 00:00:45.192 Message: lib/rcu: Defining dependency "rcu" 00:00:45.192 Message: lib/mempool: Defining dependency "mempool" 00:00:45.192 Message: lib/mbuf: Defining dependency "mbuf" 00:00:45.192 Fetching value 
of define "__PCLMUL__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:45.192 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:00:45.192 Compiler for C supports arguments -mpclmul: YES 00:00:45.192 Compiler for C supports arguments -maes: YES 00:00:45.192 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:45.192 Compiler for C supports arguments -mavx512bw: YES 00:00:45.192 Compiler for C supports arguments -mavx512dq: YES 00:00:45.192 Compiler for C supports arguments -mavx512vl: YES 00:00:45.192 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:45.192 Compiler for C supports arguments -mavx2: YES 00:00:45.192 Compiler for C supports arguments -mavx: YES 00:00:45.192 Message: lib/net: Defining dependency "net" 00:00:45.192 Message: lib/meter: Defining dependency "meter" 00:00:45.192 Message: lib/ethdev: Defining dependency "ethdev" 00:00:45.192 Message: lib/pci: Defining dependency "pci" 00:00:45.192 Message: lib/cmdline: Defining dependency "cmdline" 00:00:45.192 Message: lib/metrics: Defining dependency "metrics" 00:00:45.192 Message: lib/hash: Defining dependency "hash" 00:00:45.192 Message: lib/timer: Defining dependency "timer" 00:00:45.192 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512CD__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:45.192 Message: lib/acl: Defining dependency "acl" 00:00:45.192 Message: lib/bbdev: Defining dependency "bbdev" 00:00:45.192 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:45.192 Run-time dependency libelf found: YES 0.190 00:00:45.192 Message: lib/bpf: Defining dependency "bpf" 
00:00:45.192 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:45.192 Message: lib/compressdev: Defining dependency "compressdev" 00:00:45.192 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:45.192 Message: lib/distributor: Defining dependency "distributor" 00:00:45.192 Message: lib/dmadev: Defining dependency "dmadev" 00:00:45.192 Message: lib/efd: Defining dependency "efd" 00:00:45.192 Message: lib/eventdev: Defining dependency "eventdev" 00:00:45.192 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:45.192 Message: lib/gpudev: Defining dependency "gpudev" 00:00:45.192 Message: lib/gro: Defining dependency "gro" 00:00:45.192 Message: lib/gso: Defining dependency "gso" 00:00:45.192 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:45.192 Message: lib/jobstats: Defining dependency "jobstats" 00:00:45.192 Message: lib/latencystats: Defining dependency "latencystats" 00:00:45.192 Message: lib/lpm: Defining dependency "lpm" 00:00:45.192 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:45.192 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:00:45.192 Message: lib/member: Defining dependency "member" 00:00:45.192 Message: lib/pcapng: Defining dependency "pcapng" 00:00:45.192 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:45.192 Message: lib/power: Defining dependency "power" 00:00:45.192 Message: lib/rawdev: Defining dependency "rawdev" 00:00:45.192 Message: lib/regexdev: Defining dependency "regexdev" 00:00:45.192 Message: lib/mldev: Defining dependency "mldev" 00:00:45.192 Message: lib/rib: Defining dependency "rib" 00:00:45.192 Message: lib/reorder: Defining dependency "reorder" 00:00:45.192 Message: lib/sched: Defining dependency "sched" 00:00:45.192 Message: lib/security: Defining dependency "security" 00:00:45.192 Message: lib/stack: 
Defining dependency "stack" 00:00:45.192 Has header "linux/userfaultfd.h" : YES 00:00:45.192 Has header "linux/vduse.h" : YES 00:00:45.192 Message: lib/vhost: Defining dependency "vhost" 00:00:45.192 Message: lib/ipsec: Defining dependency "ipsec" 00:00:45.192 Message: lib/pdcp: Defining dependency "pdcp" 00:00:45.192 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:45.192 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:45.192 Message: lib/fib: Defining dependency "fib" 00:00:45.192 Message: lib/port: Defining dependency "port" 00:00:45.192 Message: lib/pdump: Defining dependency "pdump" 00:00:45.192 Message: lib/table: Defining dependency "table" 00:00:45.192 Message: lib/pipeline: Defining dependency "pipeline" 00:00:45.192 Message: lib/graph: Defining dependency "graph" 00:00:45.192 Message: lib/node: Defining dependency "node" 00:00:45.192 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:46.583 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:46.583 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:46.583 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:46.583 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:46.583 Compiler for C supports arguments -Wno-unused-value: YES 00:00:46.583 Compiler for C supports arguments -Wno-format: YES 00:00:46.583 Compiler for C supports arguments -Wno-format-security: YES 00:00:46.583 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:46.583 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:46.583 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:46.583 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:46.583 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:46.583 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:46.583 Compiler for C supports arguments 
-mavx512f: YES (cached) 00:00:46.583 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:46.583 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:46.583 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:46.583 Has header "sys/epoll.h" : YES 00:00:46.583 Program doxygen found: YES (/usr/bin/doxygen) 00:00:46.583 Configuring doxy-api-html.conf using configuration 00:00:46.583 Configuring doxy-api-man.conf using configuration 00:00:46.583 Program mandb found: YES (/usr/bin/mandb) 00:00:46.583 Program sphinx-build found: NO 00:00:46.583 Configuring rte_build_config.h using configuration
00:00:46.583 Message:
00:00:46.583 =================
00:00:46.583 Applications Enabled
00:00:46.583 =================
00:00:46.583
00:00:46.583 apps:
00:00:46.583 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:00:46.583 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:00:46.583 test-pmd, test-regex, test-sad, test-security-perf,
00:00:46.583
00:00:46.583 Message:
00:00:46.583 =================
00:00:46.583 Libraries Enabled
00:00:46.583 =================
00:00:46.583
00:00:46.583 libs:
00:00:46.583 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:00:46.583 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:00:46.583 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:00:46.583 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:00:46.583 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:00:46.583 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:00:46.583 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:00:46.583
00:00:46.583
00:00:46.583 Message:
00:00:46.583 ===============
00:00:46.583 Drivers Enabled
00:00:46.583 ===============
00:00:46.583
00:00:46.584 common:
00:00:46.584
00:00:46.584 bus:
00:00:46.584 pci, vdev,
00:00:46.584 mempool:
00:00:46.584 ring,
00:00:46.584 dma:
00:00:46.584
00:00:46.584 net:
00:00:46.584 i40e,
00:00:46.584 raw:
00:00:46.584
00:00:46.584 crypto:
00:00:46.584
00:00:46.584 compress:
00:00:46.584
00:00:46.584 regex:
00:00:46.584
00:00:46.584 ml:
00:00:46.584
00:00:46.584 vdpa:
00:00:46.584
00:00:46.584 event:
00:00:46.584
00:00:46.584 baseband:
00:00:46.584
00:00:46.584 gpu:
00:00:46.584
00:00:46.584
00:00:46.584 Message:
00:00:46.584 =================
00:00:46.584 Content Skipped
00:00:46.584 =================
00:00:46.584
00:00:46.584 apps:
00:00:46.584
00:00:46.584 libs:
00:00:46.584
00:00:46.584 drivers:
00:00:46.584 common/cpt: not in enabled drivers build config
00:00:46.584 common/dpaax: not in enabled drivers build config
00:00:46.584 common/iavf: not in enabled drivers build config
00:00:46.584 common/idpf: not in enabled drivers build config
00:00:46.584 common/mvep: not in enabled drivers build config
00:00:46.584 common/octeontx: not in enabled drivers build config
00:00:46.584 bus/auxiliary: not in enabled drivers build config
00:00:46.584 bus/cdx: not in enabled drivers build config
00:00:46.584 bus/dpaa: not in enabled drivers build config
00:00:46.584 bus/fslmc: not in enabled drivers build config
00:00:46.584 bus/ifpga: not in enabled drivers build config
00:00:46.584 bus/platform: not in enabled drivers build config
00:00:46.584 bus/vmbus: not in enabled drivers build config
00:00:46.584 common/cnxk: not in enabled drivers build config
00:00:46.584 common/mlx5: not in enabled drivers build config
00:00:46.584 common/nfp: not in enabled drivers build config
00:00:46.584 common/qat: not in enabled drivers build config
00:00:46.584 common/sfc_efx: not in enabled drivers build config
00:00:46.584 mempool/bucket: not in enabled drivers build config
00:00:46.584 mempool/cnxk: not in enabled drivers build config
00:00:46.584 mempool/dpaa: not in enabled drivers build config
00:00:46.584 mempool/dpaa2: not in enabled drivers build config
00:00:46.584 mempool/octeontx: not in enabled drivers build config
00:00:46.584 mempool/stack: not in enabled drivers build config
00:00:46.584 dma/cnxk: not in enabled drivers build config
00:00:46.584 dma/dpaa: not in enabled drivers build config
00:00:46.584 dma/dpaa2: not in enabled drivers build config
00:00:46.584 dma/hisilicon: not in enabled drivers build config
00:00:46.584 dma/idxd: not in enabled drivers build config
00:00:46.584 dma/ioat: not in enabled drivers build config
00:00:46.584 dma/skeleton: not in enabled drivers build config
00:00:46.584 net/af_packet: not in enabled drivers build config
00:00:46.584 net/af_xdp: not in enabled drivers build config
00:00:46.584 net/ark: not in enabled drivers build config
00:00:46.584 net/atlantic: not in enabled drivers build config
00:00:46.584 net/avp: not in enabled drivers build config
00:00:46.584 net/axgbe: not in enabled drivers build config
00:00:46.584 net/bnx2x: not in enabled drivers build config
00:00:46.584 net/bnxt: not in enabled drivers build config
00:00:46.584 net/bonding: not in enabled drivers build config
00:00:46.584 net/cnxk: not in enabled drivers build config
00:00:46.584 net/cpfl: not in enabled drivers build config
00:00:46.584 net/cxgbe: not in enabled drivers build config
00:00:46.584 net/dpaa: not in enabled drivers build config
00:00:46.584 net/dpaa2: not in enabled drivers build config
00:00:46.584 net/e1000: not in enabled drivers build config
00:00:46.584 net/ena: not in enabled drivers build config
00:00:46.584 net/enetc: not in enabled drivers build config
00:00:46.584 net/enetfec: not in enabled drivers build config
00:00:46.584 net/enic: not in enabled drivers build config
00:00:46.584 net/failsafe: not in enabled drivers build config
00:00:46.584 net/fm10k: not in enabled drivers build config
00:00:46.584 net/gve: not in enabled drivers build config
00:00:46.584 net/hinic: not in enabled drivers build config
00:00:46.584 net/hns3: not in enabled drivers build config
00:00:46.584 net/iavf: not in enabled drivers build config
00:00:46.584 net/ice: not in enabled drivers build config
00:00:46.584 net/idpf: not in enabled drivers build config
00:00:46.584 net/igc: not in enabled drivers build config
00:00:46.584 net/ionic: not in enabled drivers build config
00:00:46.584 net/ipn3ke: not in enabled drivers build config
00:00:46.584 net/ixgbe: not in enabled drivers build config
00:00:46.584 net/mana: not in enabled drivers build config
00:00:46.584 net/memif: not in enabled drivers build config
00:00:46.584 net/mlx4: not in enabled drivers build config
00:00:46.584 net/mlx5: not in enabled drivers build config
00:00:46.584 net/mvneta: not in enabled drivers build config
00:00:46.584 net/mvpp2: not in enabled drivers build config
00:00:46.584 net/netvsc: not in enabled drivers build config
00:00:46.584 net/nfb: not in enabled drivers build config
00:00:46.584 net/nfp: not in enabled drivers build config
00:00:46.584 net/ngbe: not in enabled drivers build config
00:00:46.584 net/null: not in enabled drivers build config
00:00:46.584 net/octeontx: not in enabled drivers build config
00:00:46.584 net/octeon_ep: not in enabled drivers build config
00:00:46.584 net/pcap: not in enabled drivers build config
00:00:46.584 net/pfe: not in enabled drivers build config
00:00:46.584 net/qede: not in enabled drivers build config
00:00:46.584 net/ring: not in enabled drivers build config
00:00:46.584 net/sfc: not in enabled drivers build config
00:00:46.584 net/softnic: not in enabled drivers build config
00:00:46.584 net/tap: not in enabled drivers build config
00:00:46.584 net/thunderx: not in enabled drivers build config
00:00:46.584 net/txgbe: not in enabled drivers build config
00:00:46.584 net/vdev_netvsc: not in enabled drivers build config
00:00:46.584 net/vhost: not in enabled drivers build config
00:00:46.584 net/virtio: not in enabled drivers build config
00:00:46.584 net/vmxnet3: not in enabled drivers build config
00:00:46.584 raw/cnxk_bphy: not in enabled drivers build config
00:00:46.584 raw/cnxk_gpio: not in enabled drivers build config
00:00:46.584 raw/dpaa2_cmdif: not in enabled drivers build config
00:00:46.584 raw/ifpga: not in enabled drivers build config
00:00:46.584 raw/ntb: not in enabled drivers build config
00:00:46.584 raw/skeleton: not in enabled drivers build config
00:00:46.584 crypto/armv8: not in enabled drivers build config
00:00:46.584 crypto/bcmfs: not in enabled drivers build config
00:00:46.584 crypto/caam_jr: not in enabled drivers build config
00:00:46.584 crypto/ccp: not in enabled drivers build config
00:00:46.584 crypto/cnxk: not in enabled drivers build config
00:00:46.584 crypto/dpaa_sec: not in enabled drivers build config
00:00:46.584 crypto/dpaa2_sec: not in enabled drivers build config
00:00:46.584 crypto/ipsec_mb: not in enabled drivers build config
00:00:46.584 crypto/mlx5: not in enabled drivers build config
00:00:46.584 crypto/mvsam: not in enabled drivers build config
00:00:46.584 crypto/nitrox: not in enabled drivers build config
00:00:46.584 crypto/null: not in enabled drivers build config
00:00:46.584 crypto/octeontx: not in enabled drivers build config
00:00:46.584 crypto/openssl: not in enabled drivers build config
00:00:46.584 crypto/scheduler: not in enabled drivers build config
00:00:46.584 crypto/uadk: not in enabled drivers build config
00:00:46.584 crypto/virtio: not in enabled drivers build config
00:00:46.584 compress/isal: not in enabled drivers build config
00:00:46.584 compress/mlx5: not in enabled drivers build config
00:00:46.584 compress/octeontx: not in enabled drivers build config
00:00:46.584 compress/zlib: not in enabled drivers build config
00:00:46.584 regex/mlx5: not in enabled drivers build config
00:00:46.584 regex/cn9k: not in enabled drivers build config
00:00:46.584 ml/cnxk: not in enabled drivers build config
00:00:46.584 vdpa/ifc: not in enabled drivers build config
00:00:46.584 vdpa/mlx5: not in enabled drivers build config
00:00:46.584 vdpa/nfp: not in enabled drivers build config
00:00:46.584 vdpa/sfc: not in enabled drivers build config
00:00:46.584 event/cnxk: not in enabled drivers build config
00:00:46.584 event/dlb2: not in enabled drivers build config
00:00:46.584 event/dpaa: not in enabled drivers build config
00:00:46.584 event/dpaa2: not in enabled drivers build config
00:00:46.584 event/dsw: not in enabled drivers build config
00:00:46.584 event/opdl: not in enabled drivers build config
00:00:46.584 event/skeleton: not in enabled drivers build config
00:00:46.584 event/sw: not in enabled drivers build config
00:00:46.584 event/octeontx: not in enabled drivers build config
00:00:46.584 baseband/acc: not in enabled drivers build config
00:00:46.584 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:00:46.584 baseband/fpga_lte_fec: not in enabled drivers build config
00:00:46.584 baseband/la12xx: not in enabled drivers build config
00:00:46.584 baseband/null: not in enabled drivers build config
00:00:46.584 baseband/turbo_sw: not in enabled drivers build config
00:00:46.584 gpu/cuda: not in enabled drivers build config
00:00:46.584
00:00:46.584
00:00:46.584 Build targets in project: 217
00:00:46.584
00:00:46.584 DPDK 23.11.0
00:00:46.584
00:00:46.584 User defined options
00:00:46.584 libdir : lib
00:00:46.584 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.584 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:00:46.584 c_link_args :
00:00:46.584 enable_docs : false
00:00:46.584 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:46.584 enable_kmods : false
00:00:46.584 machine : native
00:00:46.584 tests : false
00:00:46.584
00:00:46.584 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:46.584 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
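The configure step above trips two Meson deprecation warnings: the `machine` option (Meson suggests `cpu_instruction_set`) and invoking `meson [options]` without the explicit `setup` subcommand. A minimal sketch of an equivalent modernized invocation — assuming DPDK 23.11's option names and reusing the paths and flags shown in the log above — would be:

```shell
# Sketch only: same DPDK 23.11 configure step as in the log, with both
# deprecations addressed. Paths and flag values are copied from the log;
# run from the DPDK source directory.
meson setup build-tmp \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  --libdir lib \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Dc_link_args= \
  '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Dcpu_instruction_set=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
```

`-Dcpu_instruction_set=native` replaces `-Dmachine=native` one-for-one here; the resulting build configuration should be otherwise identical to the one summarized above.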
00:00:46.584 17:11:25 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j112 00:00:46.584 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:00:46.854 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:46.854 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:00:46.854 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:46.854 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:46.854 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:46.854 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:46.854 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:46.854 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:46.854 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:46.854 [10/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:46.854 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:46.854 [12/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:46.854 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:46.854 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:46.854 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:46.854 [16/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:46.854 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:47.116 [18/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:47.116 [19/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:47.116 [20/707] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:47.116 [21/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:47.116 [22/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:47.116 [23/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:47.116 [24/707] Linking static target lib/librte_kvargs.a 00:00:47.116 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:47.116 [26/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:00:47.116 [27/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:47.116 [28/707] Linking static target lib/librte_log.a 00:00:47.116 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:47.116 [30/707] Linking static target lib/librte_pci.a 00:00:47.116 [31/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:47.116 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:47.116 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:47.116 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:47.376 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:47.376 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:47.376 [37/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:47.376 [38/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:47.376 [39/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:47.376 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:47.640 [41/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:47.640 [42/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:47.640 [43/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:47.640 [44/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:47.640 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:47.640 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:47.641 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:47.641 [48/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:47.641 [49/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:47.641 [50/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:47.641 [51/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:47.641 [52/707] Linking static target lib/librte_ring.a 00:00:47.641 [53/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:47.641 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:47.641 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:47.641 [56/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:47.641 [57/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:47.641 [58/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:47.641 [59/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:47.641 [60/707] Linking static target lib/librte_meter.a 00:00:47.641 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:47.641 [62/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:47.641 [63/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:47.641 [64/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:47.641 [65/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:47.641 [66/707] Compiling C 
object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:47.641 [67/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:47.641 [68/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:47.641 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:47.641 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:47.641 [71/707] Linking static target lib/librte_cmdline.a 00:00:47.641 [72/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:47.641 [73/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:47.641 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:47.902 [75/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:47.903 [76/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:47.903 [77/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:47.903 [78/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:47.903 [79/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:47.903 [80/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:47.903 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:47.903 [82/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:47.903 [83/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:47.903 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:47.903 [85/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:47.903 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:47.903 [87/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:00:47.903 [88/707] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:47.903 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:47.903 [90/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:47.903 [91/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:47.903 [92/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:47.903 [93/707] Linking static target lib/librte_metrics.a 00:00:47.903 [94/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:47.903 [95/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:47.903 [96/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:47.903 [97/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:47.903 [98/707] Linking target lib/librte_log.so.24.0 00:00:47.903 [99/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:47.903 [100/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:47.903 [101/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:47.903 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:47.903 [103/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:47.903 [104/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:47.903 [105/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:47.903 [106/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:47.903 [107/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:47.903 [108/707] Linking static target lib/librte_cfgfile.a 00:00:48.162 [109/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:48.163 [110/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:48.163 [111/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:48.163 
[112/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:00:48.163 [113/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:00:48.163 [114/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:00:48.163 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:00:48.163 [116/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:00:48.163 [117/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:00:48.163 [118/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:00:48.163 [119/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:00:48.163 [120/707] Linking static target lib/librte_net.a
00:00:48.163 [121/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:00:48.163 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:00:48.163 [123/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:00:48.163 [124/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:00:48.163 [125/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:00:48.163 [126/707] Linking static target lib/librte_bitratestats.a
00:00:48.163 [127/707] Linking target lib/librte_kvargs.so.24.0
00:00:48.163 [128/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:00:48.163 [129/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:00:48.163 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:00:48.163 [131/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:00:48.163 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:00:48.424 [133/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:00:48.424 [134/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:00:48.424 [135/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:00:48.424 [136/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:00:48.424 [137/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:00:48.424 [138/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:00:48.424 [139/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:00:48.424 [140/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:00:48.424 [141/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:00:48.424 [142/707] Linking static target lib/librte_compressdev.a
00:00:48.424 [143/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:00:48.424 [144/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:00:48.424 [145/707] Linking static target lib/librte_timer.a
00:00:48.424 [146/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:00:48.424 [147/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:00:48.424 [148/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:00:48.424 [149/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:00:48.424 [150/707] Linking static target lib/librte_mempool.a
00:00:48.424 [151/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:00:48.424 [152/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:00:48.424 [153/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:00:48.424 [154/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:00:48.424 [155/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:00:48.685 [156/707] Linking static target lib/librte_bbdev.a
00:00:48.685 [157/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:00:48.685 [158/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:00:48.685 [159/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:00:48.685 [160/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:00:48.685 [161/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:00:48.685 [162/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:00:48.685 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:00:48.685 [164/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:00:48.685 [165/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:00:48.685 [166/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:00:48.685 [167/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:00:48.685 [168/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:00:48.685 [169/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:00:48.685 [170/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:00:48.685 [171/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:00:48.685 [172/707] Linking static target lib/librte_jobstats.a
00:00:48.685 [173/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:00:48.685 [174/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:00:48.685 [175/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:00:48.685 [176/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:00:48.685 [177/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:00:48.685 [178/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:00:48.685 [179/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:00:48.685 [180/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:00:48.946 [181/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:00:48.946 [182/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:00:48.946 [183/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:00:48.946 [184/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:00:48.946 [185/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:00:48.946 [186/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:00:48.946 [187/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:00:48.946 [188/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:00:48.946 [189/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:00:48.946 [190/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:00:48.946 [191/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:00:48.946 [192/707] Linking static target lib/librte_dispatcher.a
00:00:48.946 [193/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:00:48.946 [194/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:00:48.946 [195/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:00:48.946 [196/707] Linking static target lib/librte_latencystats.a
00:00:48.946 [197/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:00:48.946 [198/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:00:48.946 [199/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:00:48.946 [200/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:00:48.946 [201/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:00:48.946 [202/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:00:48.946 [203/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:00:48.946 [204/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:00:48.946 [205/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:00:48.946 [206/707] Linking static target lib/librte_rcu.a
00:00:48.946 [207/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:00:48.946 [208/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:00:48.946 [209/707] Linking static target lib/librte_gpudev.a
00:00:48.946 [210/707] Linking static target lib/librte_gro.a
00:00:48.946 [211/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:00:48.946 [212/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:00:48.946 [213/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:00:48.946 [214/707] Linking static target lib/librte_telemetry.a
00:00:49.210 [215/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.210 [216/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:00:49.210 [217/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:00:49.210 [218/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:00:49.210 [219/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:00:49.210 [220/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:00:49.210 [221/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:00:49.210 [222/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:00:49.210 [223/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:00:49.210 [224/707] Linking static target lib/librte_stack.a
00:00:49.210 [225/707] Linking static target lib/librte_gso.a
00:00:49.210 [226/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:00:49.210 [227/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:00:49.210 [228/707] Linking static target lib/librte_dmadev.a
00:00:49.210 [229/707] Linking static target lib/librte_eal.a
00:00:49.210 [230/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:00:49.210 [231/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:00:49.210 [232/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:00:49.210 [233/707] Linking static target lib/librte_distributor.a
00:00:49.210 [234/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:00:49.210 [235/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.210 [236/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:00:49.210 [237/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.210 [238/707] Linking static target lib/librte_regexdev.a
00:00:49.210 [239/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:00:49.210 [240/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:00:49.210 [241/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:00:49.210 [242/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:00:49.210 [243/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:00:49.210 [244/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.210 [245/707] Linking static target lib/librte_ip_frag.a
00:00:49.210 [246/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:00:49.210 [247/707] Linking static target lib/librte_mldev.a
00:00:49.472 [248/707] Linking static target lib/librte_rawdev.a
00:00:49.472 [249/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:00:49.472 [250/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:00:49.472 [251/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:00:49.472 [252/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:00:49.472 [253/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:00:49.472 [254/707] Linking static target lib/librte_mbuf.a
00:00:49.472 [255/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:00:49.472 [256/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:00:49.472 [257/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:00:49.472 [258/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:00:49.472 [259/707] Linking static target lib/librte_power.a
00:00:49.472 [260/707] Linking static target lib/librte_pcapng.a
00:00:49.472 [261/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:00:49.472 [262/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.472 [263/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:00:49.472 [264/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.472 [265/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:00:49.472 [266/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:00:49.472 [267/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:00:49.472 [268/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:00:49.472 [269/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:00:49.472 [270/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.472 [271/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.472 [272/707] Linking static target lib/librte_bpf.a
00:00:49.472 [273/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:00:49.472 [274/707] Linking static target lib/librte_reorder.a
00:00:49.472 [275/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.472 [276/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.472 [277/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:00:49.730 [278/707] Linking static target lib/librte_security.a
00:00:49.730 [279/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:00:49.730 [280/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:00:49.730 [281/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.730 [282/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.730 [283/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:00:49.730 [284/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:00:49.730 [285/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:00:49.730 [286/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:00:49.730 [287/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:00:49.730 [288/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.730 [289/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.730 [290/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:00:49.730 [291/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:00:49.730 [292/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.730 [293/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:00:49.730 [294/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:00:49.730 [295/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.730 [296/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:00:49.730 [297/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:00:49.730 [298/707] Linking target lib/librte_telemetry.so.24.0
00:00:49.993 [299/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:00:49.993 [300/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:00:49.993 [301/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:00:49.993 [302/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:00:49.993 [303/707] Linking static target lib/librte_lpm.a
00:00:49.993 [304/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:00:49.993 [305/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:00:49.993 [306/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:00:49.993 [307/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.993 [308/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:00:49.993 [309/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:00:49.993 [310/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:00:49.993 [311/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:00:49.993 [312/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:00:49.993 [313/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.993 [314/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:00:49.993 [315/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:00:49.993 [316/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:00:49.993 [317/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:49.993 [318/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:00:50.253 [319/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:00:50.253 [320/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:00:50.253 [321/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:00:50.253 [322/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:00:50.253 [323/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:00:50.253 [324/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:00:50.253 [325/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:00:50.253 [326/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:00:50.253 [327/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:00:50.253 [328/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:00:50.253 [329/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:00:50.253 [330/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:00:50.253 [331/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:00:50.253 [332/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:00:50.253 [333/707] Linking static target lib/librte_efd.a
00:00:50.253 [334/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:00:50.253 [335/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:00:50.253 [336/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:00:50.253 [337/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:00:50.253 [338/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:00:50.253 [339/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:00:50.253 [340/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:00:50.513 [341/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:00:50.513 [342/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:00:50.513 [343/707] Linking static target lib/librte_rib.a
00:00:50.513 [344/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:00:50.513 [345/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:00:50.513 [346/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:00:50.513 [347/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:00:50.513 [348/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:00:50.513 [349/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:00:50.513 [350/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:00:50.513 [351/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:00:50.513 [352/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:50.513 [353/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:00:50.513 [354/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:50.513 [355/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:00:50.513 [356/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:00:50.513 [357/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:00:50.513 [358/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:00:50.779 [359/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:00:50.779 [360/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:00:50.779 [361/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:00:50.779 [362/707] Linking static target lib/librte_fib.a
00:00:50.779 [363/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:00:50.779 [364/707] Linking static target lib/librte_pdump.a
00:00:50.779 [365/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:00:50.779 [366/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:00:50.779 [367/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:00:50.779 [368/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:00:50.779 [369/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:00:50.779 [370/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:00:50.779 [371/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:00:50.779 [372/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:00:50.779 [373/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:00:50.779 [374/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:00:50.779 [375/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:00:50.779 [376/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:00:50.779 [377/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:00:50.779 [378/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:00:50.779 [379/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:00:50.779 [380/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:00:51.045 [381/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:00:51.045 [382/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:00:51.045 [383/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:00:51.045 [384/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:00:51.045 [385/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:00:51.045 [386/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:00:51.045 [387/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:00:51.045 [388/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:00:51.045 [389/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:00:51.045 [390/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:00:51.045 [391/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:00:51.045 [392/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:00:51.045 [393/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:00:51.045 [394/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:00:51.045 [395/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:00:51.045 [396/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:00:51.045 [397/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:00:51.045 [398/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:00:51.045 [399/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:00:51.045 [400/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:00:51.045 [401/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:00:51.045 [402/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:00:51.045 [403/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:00:51.045 [404/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:00:51.045 [405/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:00:51.045 [406/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:00:51.307 [407/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:00:51.307 [408/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:00:51.307 [409/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:00:51.307 [410/707] Linking static target drivers/librte_bus_vdev.a
00:00:51.307 [411/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:00:51.307 [412/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:00:51.307 [413/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:00:51.307 [414/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:00:51.307 [415/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:00:51.307 [416/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:00:51.307 [417/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:00:51.307 [418/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:00:51.307 [419/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:00:51.307 [420/707] Linking static target lib/librte_table.a
00:00:51.307 [421/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:00:51.307 [422/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:00:51.307 [423/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:00:51.567 [424/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:00:51.567 [425/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:00:51.567 [426/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:00:51.567 [427/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:00:51.567 [428/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:00:51.567 [429/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:00:51.567 [430/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:51.567 [431/707] Linking static target lib/librte_sched.a
00:00:51.567 [432/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:00:51.567 [433/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:00:51.567 [434/707] Linking static target drivers/librte_bus_pci.a
00:00:51.567 [435/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:00:51.567 [436/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:00:51.567 [437/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:00:51.567 [438/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:51.567 [439/707] Linking static target lib/librte_eventdev.a
00:00:51.567 [440/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:00:51.567 [441/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:00:51.567 [442/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:00:51.567 [443/707] Linking static target lib/librte_graph.a
00:00:51.567 [444/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:00:51.567 [445/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:00:51.567 [446/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:00:51.567 [447/707] Linking static target lib/librte_cryptodev.a
00:00:51.827 [448/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:00:51.827 [449/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:00:51.827 [450/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:00:51.827 [451/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:00:51.827 [452/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:00:51.827 [453/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:00:51.827 [454/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:00:51.827 [455/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:00:51.827 [456/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:00:51.827 [457/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:00:51.827 [458/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:00:51.827 [459/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:00:51.827 [460/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:00:51.827 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:00:51.827 [462/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:00:51.827 [463/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:00:51.827 [464/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:00:51.827 [465/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:00:51.827 [466/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:00:51.827 [467/707] Linking static target lib/librte_member.a
00:00:51.827 [468/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:00:51.827 [469/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:00:51.827 [470/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:00:51.827 [471/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:00:51.827 [472/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:00:52.086 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:00:52.086 [474/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:00:52.086 [475/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:00:52.086 [476/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:00:52.086 [477/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:00:52.086 [478/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:00:52.086 [479/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:00:52.086 [480/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:00:52.086 [481/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:00:52.086 [482/707] Linking static target lib/librte_ipsec.a
00:00:52.086 [483/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:00:52.086 [484/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:00:52.086 [485/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:00:52.086 [486/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:00:52.086 [487/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:00:52.086 [488/707] Linking static target lib/acl/libavx2_tmp.a
00:00:52.086 [489/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:00:52.086 [490/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:00:52.086 [491/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:00:52.086 [492/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:00:52.086 [493/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:00:52.086 [494/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:00:52.086 [495/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:00:52.086 [496/707] Linking static target lib/librte_pdcp.a
00:00:52.086 [497/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:00:52.086 [498/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:00:52.086 [499/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:00:52.086 [500/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:00:52.086 [501/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:00:52.345 [502/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:00:52.345 [503/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:00:52.345 [504/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:00:52.345 [505/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:00:52.345 [506/707] Linking static target drivers/librte_mempool_ring.a
00:00:52.345 [507/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:00:52.345 [508/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:00:52.345 [509/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:00:52.345 [510/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:00:52.345 [511/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:00:52.345 [512/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:00:52.345 [513/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:00:52.345 [514/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:00:52.345 [515/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:00:52.345 [516/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:00:52.345 [517/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:00:52.345 [518/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:00:52.346 [519/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:00:52.346 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:00:52.346 [521/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:00:52.346 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:00:52.346 [523/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:00:52.346 [524/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:00:52.346 [525/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:00:52.346 [526/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:00:52.346 [527/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:00:52.604 [528/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:00:52.604 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:00:52.604 [530/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:00:52.604 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:00:52.604 [532/707] Linking static target lib/librte_port.a
00:00:52.604 [533/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:00:52.604 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:00:52.604 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:00:52.604 [536/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:00:52.604 [537/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:00:52.604 [538/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:00:52.604 [539/707] Linking static target lib/librte_acl.a
00:00:52.604 [540/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:00:52.604 [541/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:00:52.604 [542/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:00:52.604 [543/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:00:52.604 [544/707] Linking static target lib/librte_hash.a
00:00:52.604 [545/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:00:52.604 [546/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:00:52.604 [547/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:00:52.604 [548/707] Linking static target lib/librte_node.a
00:00:52.861 [549/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:00:52.861 [550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:00:52.861 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:00:52.861 [552/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:00:52.861 [553/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:00:52.861 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:00:52.861 [555/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:00:52.861 [556/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:00:53.120 [557/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.120 [558/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:00:53.120 [559/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:00:53.120 [560/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:00:53.120 [561/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:00:53.120 [562/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:00:53.120 [563/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:00:53.120 [564/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.120 [565/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:00:53.120 [566/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.378 [567/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:00:53.378 [568/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:00:53.378 [569/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.378 [570/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:00:53.378 [571/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.636 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:00:54.201 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:00:54.201 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:00:54.201 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:00:54.201 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:00:54.765 [577/707] Generating
lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.765 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:00:54.765 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:00:55.700 [580/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:00:55.962 [581/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:00:56.236 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:00:56.236 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:00:56.517 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:00:56.517 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:56.517 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:56.517 [587/707] Linking static target drivers/librte_net_i40e.a 00:00:56.783 [588/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:00:57.717 [589/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.092 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:00:59.092 [591/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.352 [592/707] Linking target lib/librte_eal.so.24.0 00:00:59.352 [593/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:00:59.352 [594/707] Linking target lib/librte_pci.so.24.0 00:00:59.352 [595/707] Linking target lib/librte_cfgfile.so.24.0 00:00:59.352 [596/707] Linking target lib/librte_meter.so.24.0 00:00:59.352 [597/707] Linking target lib/librte_timer.so.24.0 00:00:59.352 [598/707] Linking target lib/librte_dmadev.so.24.0 00:00:59.352 [599/707] Linking target lib/librte_ring.so.24.0 00:00:59.352 [600/707] Linking 
target drivers/librte_bus_vdev.so.24.0 00:00:59.352 [601/707] Linking target lib/librte_stack.so.24.0 00:00:59.352 [602/707] Linking target lib/librte_jobstats.so.24.0 00:00:59.352 [603/707] Linking target lib/librte_rawdev.so.24.0 00:00:59.352 [604/707] Linking target lib/librte_acl.so.24.0 00:00:59.610 [605/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:00:59.610 [606/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:00:59.610 [607/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:00:59.610 [608/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:00:59.610 [609/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:00:59.610 [610/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:00:59.610 [611/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:00:59.610 [612/707] Linking target drivers/librte_bus_pci.so.24.0 00:00:59.610 [613/707] Linking target lib/librte_rcu.so.24.0 00:00:59.610 [614/707] Linking target lib/librte_mempool.so.24.0 00:00:59.869 [615/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:00:59.869 [616/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:00:59.869 [617/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:00:59.869 [618/707] Linking target drivers/librte_mempool_ring.so.24.0 00:00:59.869 [619/707] Linking target lib/librte_rib.so.24.0 00:00:59.869 [620/707] Linking target lib/librte_mbuf.so.24.0 00:01:00.128 [621/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:00.128 [622/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:00.128 [623/707] Linking target lib/librte_regexdev.so.24.0 00:01:00.128 
[624/707] Linking target lib/librte_gpudev.so.24.0 00:01:00.128 [625/707] Linking target lib/librte_bbdev.so.24.0 00:01:00.128 [626/707] Linking target lib/librte_distributor.so.24.0 00:01:00.128 [627/707] Linking target lib/librte_net.so.24.0 00:01:00.128 [628/707] Linking target lib/librte_compressdev.so.24.0 00:01:00.128 [629/707] Linking target lib/librte_mldev.so.24.0 00:01:00.128 [630/707] Linking target lib/librte_reorder.so.24.0 00:01:00.128 [631/707] Linking target lib/librte_sched.so.24.0 00:01:00.128 [632/707] Linking target lib/librte_cryptodev.so.24.0 00:01:00.128 [633/707] Linking target lib/librte_fib.so.24.0 00:01:00.128 [634/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:00.128 [635/707] Linking static target lib/librte_ethdev.a 00:01:00.387 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:00.387 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:00.387 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:00.387 [639/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:00.387 [640/707] Linking target lib/librte_security.so.24.0 00:01:00.387 [641/707] Linking target lib/librte_hash.so.24.0 00:01:00.387 [642/707] Linking target lib/librte_cmdline.so.24.0 00:01:00.387 [643/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:00.387 [644/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:00.646 [645/707] Linking target lib/librte_pdcp.so.24.0 00:01:00.646 [646/707] Linking target lib/librte_ipsec.so.24.0 00:01:00.646 [647/707] Linking target lib/librte_efd.so.24.0 00:01:00.646 [648/707] Linking target lib/librte_lpm.so.24.0 00:01:00.646 [649/707] Linking target lib/librte_member.so.24.0 00:01:00.646 [650/707] Generating symbol file 
lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:00.646 [651/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:07.243 [652/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:07.243 [653/707] Linking static target lib/librte_pipeline.a 00:01:07.501 [654/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.501 [655/707] Linking target lib/librte_ethdev.so.24.0 00:01:07.758 [656/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:07.758 [657/707] Linking target lib/librte_gso.so.24.0 00:01:07.758 [658/707] Linking target lib/librte_metrics.so.24.0 00:01:07.758 [659/707] Linking target lib/librte_pcapng.so.24.0 00:01:07.758 [660/707] Linking target lib/librte_gro.so.24.0 00:01:07.758 [661/707] Linking target lib/librte_ip_frag.so.24.0 00:01:07.758 [662/707] Linking target lib/librte_bpf.so.24.0 00:01:07.758 [663/707] Linking target lib/librte_power.so.24.0 00:01:07.758 [664/707] Linking target lib/librte_eventdev.so.24.0 00:01:07.758 [665/707] Linking target drivers/librte_net_i40e.so.24.0 00:01:08.017 [666/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:08.017 [667/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:08.017 [668/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:08.017 [669/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:08.017 [670/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:08.017 [671/707] Linking target lib/librte_latencystats.so.24.0 00:01:08.017 [672/707] Linking target lib/librte_graph.so.24.0 00:01:08.017 [673/707] Linking target lib/librte_bitratestats.so.24.0 00:01:08.017 [674/707] Linking target lib/librte_pdump.so.24.0 00:01:08.017 [675/707] Linking target 
lib/librte_dispatcher.so.24.0 00:01:08.017 [676/707] Linking target lib/librte_port.so.24.0 00:01:08.276 [677/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:08.276 [678/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:08.276 [679/707] Linking target lib/librte_node.so.24.0 00:01:08.276 [680/707] Linking target lib/librte_table.so.24.0 00:01:08.534 [681/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:09.912 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:09.912 [683/707] Linking static target lib/librte_vhost.a 00:01:10.172 [684/707] Linking target app/dpdk-test-dma-perf 00:01:10.172 [685/707] Linking target app/dpdk-test-acl 00:01:10.172 [686/707] Linking target app/dpdk-pdump 00:01:10.172 [687/707] Linking target app/dpdk-test-cmdline 00:01:10.172 [688/707] Linking target app/dpdk-dumpcap 00:01:10.172 [689/707] Linking target app/dpdk-test-compress-perf 00:01:10.172 [690/707] Linking target app/dpdk-test-fib 00:01:10.172 [691/707] Linking target app/dpdk-test-gpudev 00:01:10.172 [692/707] Linking target app/dpdk-test-regex 00:01:10.172 [693/707] Linking target app/dpdk-test-bbdev 00:01:10.172 [694/707] Linking target app/dpdk-proc-info 00:01:10.172 [695/707] Linking target app/dpdk-test-flow-perf 00:01:10.172 [696/707] Linking target app/dpdk-graph 00:01:10.172 [697/707] Linking target app/dpdk-test-sad 00:01:10.431 [698/707] Linking target app/dpdk-test-mldev 00:01:10.431 [699/707] Linking target app/dpdk-test-pipeline 00:01:10.431 [700/707] Linking target app/dpdk-test-security-perf 00:01:10.431 [701/707] Linking target app/dpdk-test-crypto-perf 00:01:10.431 [702/707] Linking target app/dpdk-test-eventdev 00:01:10.431 [703/707] Linking target app/dpdk-testpmd 00:01:11.808 [704/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.809 [705/707] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.809 [706/707] Linking target lib/librte_vhost.so.24.0 00:01:11.809 [707/707] Linking target lib/librte_pipeline.so.24.0 00:01:11.809 17:11:50 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j112 install 00:01:11.809 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:11.809 [0/1] Installing files. 00:01:12.071 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:12.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.072 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:12.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:12.072 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.073 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.073 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.073 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:12.073 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.073 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:12.074 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.074 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 
00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.075 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:12.076 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:12.076 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:12.076 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:12.076 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:01:12.334 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.334 Installing 
lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_bpf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing 
lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing 
lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.335 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing 
lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing lib/librte_node.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:12.598 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:12.598 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:12.598 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:12.598 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:12.598 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.598 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.598 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.598 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.598 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.598 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.598 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-crypto-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.601 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:12.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:12.602 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:12.602 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:12.602 Installing symlink pointing 
to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:12.602 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:12.603 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:12.603 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:12.603 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:12.603 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:12.603 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:12.603 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:12.603 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:12.603 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:12.603 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:12.603 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:12.603 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:12.603 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:12.603 
Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:12.603 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:12.603 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:12.603 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:12.603 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:12.603 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:12.603 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:12.603 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:12.603 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:12.603 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:12.603 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:12.603 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:12.603 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:12.603 Installing symlink pointing to librte_hash.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:12.603 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:12.603 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:12.603 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:12.603 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:12.603 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:12.603 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:12.603 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:12.603 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:12.603 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:12.603 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:12.603 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:12.603 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:12.603 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 
00:01:12.603 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:01:12.603 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24
00:01:12.603 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:01:12.603 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24
00:01:12.603 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:01:12.603 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24
00:01:12.603 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:01:12.603 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24
00:01:12.603 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:01:12.603 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24
00:01:12.603 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:01:12.603 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24
00:01:12.603 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so
00:01:12.603 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24
00:01:12.603 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:01:12.603 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24
00:01:12.603 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so
00:01:12.603 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24
00:01:12.603 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so
00:01:12.603 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24
00:01:12.603 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:01:12.603 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24
00:01:12.603 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:01:12.603 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24
00:01:12.603 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:01:12.603 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24
00:01:12.603 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so
00:01:12.603 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24
00:01:12.603 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so
00:01:12.603 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24
00:01:12.603 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:01:12.603 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24
00:01:12.603 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so
00:01:12.603 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:01:12.603 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:01:12.603 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:01:12.603 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:01:12.603 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:01:12.603 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:01:12.603 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:01:12.603 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:01:12.603 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:01:12.604 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:01:12.604 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:01:12.604 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:01:12.604 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24
00:01:12.604 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:01:12.604 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24
00:01:12.604 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so
00:01:12.604 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24
00:01:12.604 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so
00:01:12.604 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24
00:01:12.604 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so
00:01:12.604 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24
00:01:12.604 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so
00:01:12.604 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24
00:01:12.604 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so
00:01:12.604 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24
00:01:12.604 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so
00:01:12.604 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24
00:01:12.604 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so
00:01:12.604 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24
00:01:12.604 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so
00:01:12.604 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24
00:01:12.604 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so
00:01:12.604 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24
00:01:12.604 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so
00:01:12.604 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24
00:01:12.604 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so
00:01:12.604 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24
00:01:12.604 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so
00:01:12.604 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24
00:01:12.604 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so
00:01:12.604 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24
00:01:12.604 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so
00:01:12.604 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24
00:01:12.604 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so
00:01:12.604 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24
00:01:12.604 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so
00:01:12.604 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24
00:01:12.604 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so
00:01:12.604 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:01:12.604 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:01:12.604 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:01:12.604 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:01:12.604 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:01:12.604 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:01:12.604 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:01:12.604 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:01:12.604 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:01:12.604 17:11:51 -- common/autobuild_common.sh@189 -- $ uname -s
00:01:12.604 17:11:51 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:01:12.604 17:11:51 -- common/autobuild_common.sh@200 -- $ cat
00:01:12.604 17:11:51 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:12.604
00:01:12.604 real 0m32.764s
00:01:12.604 user 10m21.533s
00:01:12.604 sys 2m19.409s
00:01:12.604 17:11:51 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:12.604 17:11:51 -- common/autotest_common.sh@10 -- $ set +x
00:01:12.604 ************************************
00:01:12.604 END TEST build_native_dpdk
00:01:12.604 ************************************
00:01:12.604 17:11:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:12.604 17:11:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:12.604 17:11:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:12.604 17:11:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:12.604 17:11:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:12.604 17:11:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:12.604 17:11:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:12.604 17:11:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:01:12.863 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:01:12.863 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:12.863 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:13.122 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:13.380 Using 'verbs' RDMA provider
00:01:28.825 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:41.035 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:41.035 Creating mk/config.mk...done.
00:01:41.035 Creating mk/cc.flags.mk...done.
00:01:41.035 Type 'make' to build.
00:01:41.035 17:12:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:41.035 17:12:19 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:41.035 17:12:19 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:41.035 17:12:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.035 ************************************
00:01:41.035 START TEST make
00:01:41.035 ************************************
00:01:41.035 17:12:19 -- common/autotest_common.sh@1104 -- $ make -j112
00:01:41.035 make[1]: Nothing to be done for 'all'.
00:01:42.424 The Meson build system
00:01:42.424 Version: 1.3.1
00:01:42.424 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:42.424 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:42.424 Build type: native build
00:01:42.424 Project name: libvfio-user
00:01:42.425 Project version: 0.0.1
00:01:42.425 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:42.425 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:42.425 Host machine cpu family: x86_64
00:01:42.425 Host machine cpu: x86_64
00:01:42.425 Run-time dependency threads found: YES
00:01:42.425 Library dl found: YES
00:01:42.425 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:42.425 Run-time dependency json-c found: YES 0.17
00:01:42.425 Run-time dependency cmocka found: YES 1.1.7
00:01:42.425 Program pytest-3 found: NO
00:01:42.425 Program flake8 found: NO
00:01:42.425 Program misspell-fixer found: NO
00:01:42.425 Program restructuredtext-lint found: NO
00:01:42.425 Program valgrind found: YES (/usr/bin/valgrind)
00:01:42.425 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:42.425 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:42.425 Compiler for C supports arguments -Wwrite-strings: YES
00:01:42.425 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:42.425 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:42.425 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:42.425 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:42.425 Build targets in project: 8
00:01:42.425 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:42.425 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:42.425
00:01:42.425 libvfio-user 0.0.1
00:01:42.425
00:01:42.425 User defined options
00:01:42.425 buildtype : debug
00:01:42.425 default_library: shared
00:01:42.425 libdir : /usr/local/lib
00:01:42.425
00:01:42.425 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:42.990 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:42.990 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:42.990 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:42.990 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:42.990 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:42.990 [5/37] Compiling C object samples/null.p/null.c.o
00:01:43.247 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:43.247 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:43.247 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:43.247 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:43.247 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:43.247 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:43.247 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:43.247 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:43.247 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:43.247 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:43.247 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:43.247 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:43.247 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:43.247 [19/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:43.247 [20/37] Compiling C object samples/server.p/server.c.o
00:01:43.247 [21/37] Compiling C object samples/client.p/client.c.o
00:01:43.247 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:43.247 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:43.247 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:43.247 [25/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:43.247 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:43.247 [27/37] Linking target samples/client
00:01:43.247 [28/37] Linking target test/unit_tests
00:01:43.247 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:43.504 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:43.504 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:43.761 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:43.761 [33/37] Linking target samples/server
00:01:43.761 [34/37] Linking target samples/gpio-pci-idio-16
00:01:43.761 [35/37] Linking target samples/null
00:01:43.761 [36/37] Linking target samples/lspci
00:01:43.761 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:43.761 INFO: autodetecting backend as ninja
00:01:43.761 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:43.761 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:44.333 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:44.333 ninja: no work to do.
00:01:54.305 CC lib/ut_mock/mock.o
00:01:54.305 CC lib/log/log.o
00:01:54.305 CC lib/log/log_flags.o
00:01:54.305 CC lib/log/log_deprecated.o
00:01:54.305 CC lib/ut/ut.o
00:01:54.564 LIB libspdk_ut.a
00:01:54.564 LIB libspdk_ut_mock.a
00:01:54.564 SO libspdk_ut.so.1.0
00:01:54.564 SO libspdk_ut_mock.so.5.0
00:01:54.564 LIB libspdk_log.a
00:01:54.564 SYMLINK libspdk_ut.so
00:01:54.564 SO libspdk_log.so.6.1
00:01:54.564 SYMLINK libspdk_ut_mock.so
00:01:54.564 SYMLINK libspdk_log.so
00:01:54.823 CC lib/util/base64.o
00:01:54.823 CC lib/util/bit_array.o
00:01:54.823 CC lib/util/cpuset.o
00:01:54.823 CC lib/util/crc32.o
00:01:54.823 CC lib/util/crc16.o
00:01:54.823 CC lib/util/crc32c.o
00:01:54.823 CC lib/util/crc32_ieee.o
00:01:54.823 CC lib/util/crc64.o
00:01:54.823 CC lib/util/dif.o
00:01:54.823 CC lib/dma/dma.o
00:01:54.823 CC lib/util/fd.o
00:01:54.823 CC lib/util/file.o
00:01:54.823 CC lib/util/hexlify.o
00:01:54.823 CC lib/util/iov.o
00:01:54.823 CC lib/util/math.o
00:01:54.823 CC lib/util/pipe.o
00:01:54.823 CC lib/util/strerror_tls.o
00:01:54.823 CC lib/util/string.o
00:01:54.823 CC lib/util/uuid.o
00:01:54.823 CC lib/ioat/ioat.o
00:01:54.823 CC lib/util/fd_group.o
00:01:54.823 CC lib/util/xor.o
00:01:54.823 CXX lib/trace_parser/trace.o
00:01:54.823 CC lib/util/zipf.o
00:01:55.081 CC lib/vfio_user/host/vfio_user_pci.o
00:01:55.081 CC lib/vfio_user/host/vfio_user.o
00:01:55.081 LIB libspdk_dma.a
00:01:55.081 SO libspdk_dma.so.3.0
00:01:55.081 SYMLINK libspdk_dma.so
00:01:55.081 LIB libspdk_ioat.a
00:01:55.339 SO libspdk_ioat.so.6.0
00:01:55.339 LIB libspdk_vfio_user.a
00:01:55.339 SYMLINK libspdk_ioat.so
00:01:55.339 SO libspdk_vfio_user.so.4.0
00:01:55.339 SYMLINK libspdk_vfio_user.so
00:01:55.983 LIB libspdk_trace_parser.a
00:01:55.983 SO libspdk_trace_parser.so.4.0
00:01:55.983 LIB libspdk_util.a
00:01:55.983 SYMLINK libspdk_trace_parser.so
00:01:55.983 SO libspdk_util.so.8.0
00:01:56.285 SYMLINK libspdk_util.so
00:01:56.285 CC lib/env_dpdk/env.o
00:01:56.285 CC lib/env_dpdk/memory.o
00:01:56.285 CC lib/env_dpdk/pci.o
00:01:56.285 CC lib/conf/conf.o
00:01:56.285 CC lib/env_dpdk/threads.o
00:01:56.285 CC lib/env_dpdk/init.o
00:01:56.285 CC lib/json/json_parse.o
00:01:56.285 CC lib/json/json_util.o
00:01:56.285 CC lib/env_dpdk/pci_ioat.o
00:01:56.285 CC lib/json/json_write.o
00:01:56.285 CC lib/env_dpdk/pci_virtio.o
00:01:56.285 CC lib/env_dpdk/pci_vmd.o
00:01:56.285 CC lib/env_dpdk/pci_idxd.o
00:01:56.285 CC lib/env_dpdk/pci_event.o
00:01:56.285 CC lib/env_dpdk/sigbus_handler.o
00:01:56.285 CC lib/vmd/vmd.o
00:01:56.285 CC lib/rdma/common.o
00:01:56.285 CC lib/env_dpdk/pci_dpdk.o
00:01:56.285 CC lib/vmd/led.o
00:01:56.285 CC lib/idxd/idxd.o
00:01:56.285 CC lib/rdma/rdma_verbs.o
00:01:56.285 CC lib/env_dpdk/pci_dpdk_2207.o
00:01:56.285 CC lib/idxd/idxd_user.o
00:01:56.285 CC lib/env_dpdk/pci_dpdk_2211.o
00:01:56.285 CC lib/idxd/idxd_kernel.o
00:01:56.544 LIB libspdk_conf.a
00:01:56.802 SO libspdk_conf.so.5.0
00:01:56.802 LIB libspdk_json.a
00:01:56.802 SO libspdk_json.so.5.1
00:01:56.802 SYMLINK libspdk_conf.so
00:01:56.802 SYMLINK libspdk_json.so
00:01:57.061 LIB libspdk_idxd.a
00:01:57.061 LIB libspdk_rdma.a
00:01:57.061 SO libspdk_rdma.so.5.0
00:01:57.061 SO libspdk_idxd.so.11.0
00:01:57.061 CC lib/jsonrpc/jsonrpc_server.o
00:01:57.061 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:01:57.061 CC lib/jsonrpc/jsonrpc_client.o
00:01:57.061 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:01:57.061 LIB libspdk_vmd.a
00:01:57.061 SYMLINK libspdk_rdma.so
00:01:57.061 SYMLINK libspdk_idxd.so
00:01:57.061 SO libspdk_vmd.so.5.0
00:01:57.061 SYMLINK libspdk_vmd.so
00:01:57.630 LIB libspdk_jsonrpc.a
00:01:57.630 SO libspdk_jsonrpc.so.5.1
00:01:57.630 SYMLINK libspdk_jsonrpc.so
00:01:57.889 CC lib/rpc/rpc.o
00:01:57.889 LIB libspdk_env_dpdk.a
00:01:57.889 SO libspdk_env_dpdk.so.13.0
00:01:58.148 LIB libspdk_rpc.a
00:01:58.148 SYMLINK libspdk_env_dpdk.so
00:01:58.148 SO libspdk_rpc.so.5.0
00:01:58.148 SYMLINK libspdk_rpc.so
00:01:58.406 CC lib/trace/trace.o
00:01:58.406 CC lib/trace/trace_flags.o
00:01:58.406 CC lib/trace/trace_rpc.o
00:01:58.406 CC lib/notify/notify.o
00:01:58.406 CC lib/notify/notify_rpc.o
00:01:58.406 CC lib/sock/sock.o
00:01:58.406 CC lib/sock/sock_rpc.o
00:01:58.665 LIB libspdk_notify.a
00:01:58.665 SO libspdk_notify.so.5.0
00:01:58.665 SYMLINK libspdk_notify.so
00:01:58.665 LIB libspdk_trace.a
00:01:58.665 SO libspdk_trace.so.9.0
00:01:58.665 LIB libspdk_sock.a
00:01:58.923 SO libspdk_sock.so.8.0
00:01:58.923 SYMLINK libspdk_trace.so
00:01:58.923 SYMLINK libspdk_sock.so
00:01:58.923 CC lib/thread/thread.o
00:01:58.923 CC lib/thread/iobuf.o
00:01:59.180 CC lib/nvme/nvme_ctrlr_cmd.o
00:01:59.180 CC lib/nvme/nvme_ctrlr.o
00:01:59.180 CC lib/nvme/nvme_fabric.o
00:01:59.180 CC lib/nvme/nvme_ns.o
00:01:59.180 CC lib/nvme/nvme_ns_cmd.o
00:01:59.180 CC lib/nvme/nvme_pcie_common.o
00:01:59.180 CC lib/nvme/nvme_pcie.o
00:01:59.180 CC lib/nvme/nvme_qpair.o
00:01:59.181 CC lib/nvme/nvme.o
00:01:59.181 CC lib/nvme/nvme_quirks.o
00:01:59.181 CC lib/nvme/nvme_transport.o
00:01:59.181 CC lib/nvme/nvme_discovery.o
00:01:59.181 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:01:59.181 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:01:59.181 CC lib/nvme/nvme_tcp.o
00:01:59.181 CC lib/nvme/nvme_opal.o
00:01:59.181 CC lib/nvme/nvme_io_msg.o
00:01:59.181 CC lib/nvme/nvme_poll_group.o
00:01:59.181 CC lib/nvme/nvme_zns.o
00:01:59.181 CC lib/nvme/nvme_cuse.o
00:01:59.181 CC lib/nvme/nvme_vfio_user.o
00:01:59.181 CC lib/nvme/nvme_rdma.o
00:02:00.556 LIB libspdk_thread.a
00:02:00.556 SO libspdk_thread.so.9.0
00:02:00.556 SYMLINK libspdk_thread.so
00:02:00.815 CC lib/vfu_tgt/tgt_endpoint.o
00:02:00.815 CC lib/accel/accel.o
00:02:00.815 CC lib/vfu_tgt/tgt_rpc.o
00:02:00.815 CC lib/accel/accel_rpc.o
00:02:00.815 CC lib/accel/accel_sw.o
00:02:00.815 CC lib/blob/blobstore.o
00:02:00.815 CC lib/blob/request.o
00:02:00.815 CC lib/init/json_config.o
00:02:00.815 CC lib/virtio/virtio.o
00:02:00.815 CC lib/init/subsystem.o
00:02:00.815 CC lib/blob/zeroes.o
00:02:00.815 CC lib/blob/blob_bs_dev.o
00:02:00.815 CC lib/virtio/virtio_vhost_user.o
00:02:00.815 CC lib/init/subsystem_rpc.o
00:02:00.815 CC lib/virtio/virtio_vfio_user.o
00:02:00.815 CC lib/init/rpc.o
00:02:00.815 CC lib/virtio/virtio_pci.o
00:02:01.073 LIB libspdk_init.a
00:02:01.073 SO libspdk_init.so.4.0
00:02:01.331 LIB libspdk_vfu_tgt.a
00:02:01.331 LIB libspdk_virtio.a
00:02:01.331 SO libspdk_vfu_tgt.so.2.0
00:02:01.331 SYMLINK libspdk_init.so
00:02:01.331 SO libspdk_virtio.so.6.0
00:02:01.331 LIB libspdk_nvme.a
00:02:01.331 SYMLINK libspdk_vfu_tgt.so
00:02:01.331 SYMLINK libspdk_virtio.so
00:02:01.331 SO libspdk_nvme.so.12.0
00:02:01.331 CC lib/event/app.o
00:02:01.331 CC lib/event/reactor.o
00:02:01.590 CC lib/event/log_rpc.o
00:02:01.590 CC lib/event/app_rpc.o
00:02:01.590 CC lib/event/scheduler_static.o
00:02:01.590 SYMLINK libspdk_nvme.so
00:02:01.849 LIB libspdk_accel.a
00:02:01.849 LIB libspdk_event.a
00:02:01.849 SO libspdk_accel.so.14.0
00:02:01.849 SO libspdk_event.so.12.0
00:02:02.107 SYMLINK libspdk_accel.so
00:02:02.107 SYMLINK libspdk_event.so
00:02:02.107 CC lib/bdev/bdev.o
00:02:02.107 CC lib/bdev/bdev_rpc.o
00:02:02.107 CC lib/bdev/bdev_zone.o
00:02:02.107 CC lib/bdev/part.o
00:02:02.107 CC lib/bdev/scsi_nvme.o
00:02:04.010 LIB libspdk_blob.a
00:02:04.010 SO libspdk_blob.so.10.1
00:02:04.010 SYMLINK libspdk_blob.so
00:02:04.010 CC lib/blobfs/blobfs.o
00:02:04.010 CC lib/blobfs/tree.o
00:02:04.010 CC lib/lvol/lvol.o
00:02:04.963 LIB libspdk_bdev.a
00:02:04.963 LIB libspdk_blobfs.a
00:02:04.963 SO libspdk_bdev.so.14.0
00:02:04.963 SO libspdk_blobfs.so.9.0
00:02:04.963 LIB libspdk_lvol.a
00:02:04.963 SYMLINK libspdk_blobfs.so
00:02:04.963 SO libspdk_lvol.so.9.1
00:02:04.963 SYMLINK libspdk_bdev.so
00:02:04.963 SYMLINK libspdk_lvol.so
00:02:05.222 CC lib/ublk/ublk.o
00:02:05.222 CC lib/ublk/ublk_rpc.o
00:02:05.222 CC lib/nbd/nbd.o
00:02:05.222 CC lib/nbd/nbd_rpc.o
00:02:05.222 CC lib/scsi/dev.o
00:02:05.222 CC lib/nvmf/ctrlr.o
00:02:05.222 CC lib/nvmf/ctrlr_discovery.o
00:02:05.222 CC lib/ftl/ftl_init.o
00:02:05.222 CC lib/ftl/ftl_core.o
00:02:05.222 CC lib/scsi/lun.o
00:02:05.222 CC lib/nvmf/ctrlr_bdev.o
00:02:05.222 CC lib/scsi/port.o
00:02:05.222 CC lib/nvmf/subsystem.o
00:02:05.222 CC lib/scsi/scsi_bdev.o
00:02:05.222 CC lib/nvmf/nvmf_rpc.o
00:02:05.222 CC lib/nvmf/nvmf.o
00:02:05.222 CC lib/scsi/scsi.o
00:02:05.222 CC lib/ftl/ftl_layout.o
00:02:05.222 CC lib/ftl/ftl_io.o
00:02:05.222 CC lib/ftl/ftl_debug.o
00:02:05.222 CC lib/scsi/scsi_pr.o
00:02:05.222 CC lib/ftl/ftl_sb.o
00:02:05.222 CC lib/scsi/scsi_rpc.o
00:02:05.222 CC lib/nvmf/transport.o
00:02:05.222 CC lib/ftl/ftl_l2p.o
00:02:05.222 CC lib/scsi/task.o
00:02:05.222 CC lib/nvmf/tcp.o
00:02:05.222 CC lib/ftl/ftl_l2p_flat.o
00:02:05.222 CC lib/ftl/ftl_nv_cache.o
00:02:05.222 CC lib/nvmf/vfio_user.o
00:02:05.222 CC lib/nvmf/rdma.o
00:02:05.222 CC lib/ftl/ftl_band.o
00:02:05.222 CC lib/ftl/ftl_band_ops.o
00:02:05.222 CC lib/ftl/ftl_writer.o
00:02:05.222 CC lib/ftl/ftl_rq.o
00:02:05.222 CC lib/ftl/ftl_reloc.o
00:02:05.222 CC lib/ftl/ftl_l2p_cache.o
00:02:05.222 CC lib/ftl/ftl_p2l.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:05.222 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:05.222 CC lib/ftl/utils/ftl_conf.o
00:02:05.222 CC lib/ftl/utils/ftl_md.o
00:02:05.222 CC lib/ftl/utils/ftl_mempool.o
00:02:05.222 CC lib/ftl/utils/ftl_bitmap.o
00:02:05.222 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:05.222 CC lib/ftl/utils/ftl_property.o
00:02:05.222 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:05.222 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:05.222 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:05.222 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:05.222 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:05.222 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:05.222 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:05.222 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:05.222 CC lib/ftl/base/ftl_base_dev.o
00:02:05.222 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:05.222 CC lib/ftl/ftl_trace.o
00:02:05.222 CC lib/ftl/base/ftl_base_bdev.o
00:02:06.157 LIB libspdk_scsi.a
00:02:06.157 SO libspdk_scsi.so.8.0
00:02:06.157 LIB libspdk_ublk.a
00:02:06.157 SYMLINK libspdk_scsi.so
00:02:06.157 SO libspdk_ublk.so.2.0
00:02:06.157 LIB libspdk_nbd.a
00:02:06.157 SYMLINK libspdk_ublk.so
00:02:06.157 SO libspdk_nbd.so.6.0
00:02:06.157 CC lib/iscsi/conn.o
00:02:06.157 CC lib/iscsi/init_grp.o
00:02:06.157 CC lib/iscsi/iscsi.o
00:02:06.157 CC lib/iscsi/md5.o
00:02:06.157 CC lib/iscsi/param.o
00:02:06.157 CC lib/iscsi/portal_grp.o
00:02:06.157 CC lib/iscsi/tgt_node.o
00:02:06.157 CC lib/vhost/vhost.o
00:02:06.157 CC lib/iscsi/iscsi_subsystem.o
00:02:06.416 CC lib/vhost/vhost_rpc.o
00:02:06.416 CC lib/iscsi/iscsi_rpc.o
00:02:06.416 CC lib/vhost/vhost_scsi.o
00:02:06.416 CC lib/iscsi/task.o
00:02:06.416 CC lib/vhost/vhost_blk.o
00:02:06.416 CC lib/vhost/rte_vhost_user.o
00:02:06.416 SYMLINK libspdk_nbd.so
00:02:06.416 LIB libspdk_ftl.a
00:02:06.674 SO libspdk_ftl.so.8.0
00:02:06.932 SYMLINK libspdk_ftl.so
00:02:07.191 LIB libspdk_iscsi.a
00:02:07.191 SO libspdk_iscsi.so.7.0
00:02:07.450 LIB libspdk_vhost.a
00:02:07.450 SYMLINK libspdk_iscsi.so
00:02:07.450 SO libspdk_vhost.so.7.1
00:02:07.709 SYMLINK libspdk_vhost.so
00:02:08.648 LIB libspdk_nvmf.a
00:02:08.648 SO libspdk_nvmf.so.17.0
00:02:08.907 SYMLINK libspdk_nvmf.so
00:02:09.165 CC module/env_dpdk/env_dpdk_rpc.o
00:02:09.165 CC module/vfu_device/vfu_virtio.o
00:02:09.165 CC module/vfu_device/vfu_virtio_scsi.o
00:02:09.165 CC module/vfu_device/vfu_virtio_blk.o
00:02:09.165 CC module/vfu_device/vfu_virtio_rpc.o
00:02:09.165 CC module/blob/bdev/blob_bdev.o
00:02:09.165 CC module/sock/posix/posix.o
00:02:09.165 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:09.165 CC module/accel/iaa/accel_iaa.o
00:02:09.165 CC module/accel/dsa/accel_dsa.o
00:02:09.165 CC module/accel/iaa/accel_iaa_rpc.o
00:02:09.165 CC module/accel/dsa/accel_dsa_rpc.o
00:02:09.165 CC module/accel/ioat/accel_ioat.o
00:02:09.165 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:09.165 CC module/accel/ioat/accel_ioat_rpc.o
00:02:09.165 CC module/accel/error/accel_error.o
00:02:09.165 CC module/scheduler/gscheduler/gscheduler.o
00:02:09.165 CC module/accel/error/accel_error_rpc.o
00:02:09.422 LIB libspdk_env_dpdk_rpc.a
00:02:09.422 SO libspdk_env_dpdk_rpc.so.5.0
00:02:09.422 SYMLINK libspdk_env_dpdk_rpc.so
00:02:09.422 LIB libspdk_scheduler_gscheduler.a
00:02:09.422 LIB libspdk_scheduler_dpdk_governor.a
00:02:09.422 LIB libspdk_accel_dsa.a
00:02:09.422 LIB libspdk_accel_error.a
00:02:09.422 SO libspdk_scheduler_gscheduler.so.3.0
00:02:09.422 LIB libspdk_scheduler_dynamic.a
00:02:09.422 SO libspdk_scheduler_dpdk_governor.so.3.0
00:02:09.422 LIB libspdk_accel_ioat.a
00:02:09.422 SO libspdk_accel_error.so.1.0
00:02:09.422 SO libspdk_accel_dsa.so.4.0
00:02:09.422 SO libspdk_scheduler_dynamic.so.3.0
00:02:09.422 SO libspdk_accel_ioat.so.5.0
00:02:09.422 SYMLINK libspdk_scheduler_gscheduler.so
00:02:09.422 LIB libspdk_blob_bdev.a
00:02:09.422 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:09.679 SYMLINK libspdk_scheduler_dynamic.so
00:02:09.679 SYMLINK libspdk_accel_error.so
00:02:09.679 SYMLINK libspdk_accel_dsa.so
00:02:09.679 SO libspdk_blob_bdev.so.10.1
00:02:09.679 SYMLINK libspdk_accel_ioat.so
00:02:09.679 SYMLINK libspdk_blob_bdev.so
00:02:09.679 LIB libspdk_vfu_device.a
00:02:09.679 SO libspdk_vfu_device.so.2.0
00:02:09.679 LIB libspdk_accel_iaa.a
00:02:09.679 SO libspdk_accel_iaa.so.2.0 00:02:09.679 SYMLINK libspdk_vfu_device.so 00:02:09.679 SYMLINK libspdk_accel_iaa.so 00:02:09.937 CC module/bdev/lvol/vbdev_lvol.o 00:02:09.937 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:09.937 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:09.937 CC module/bdev/delay/vbdev_delay.o 00:02:09.937 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:09.937 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:09.937 CC module/bdev/ftl/bdev_ftl.o 00:02:09.937 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:09.937 CC module/bdev/aio/bdev_aio.o 00:02:09.937 CC module/bdev/gpt/gpt.o 00:02:09.937 CC module/bdev/passthru/vbdev_passthru.o 00:02:09.937 CC module/bdev/malloc/bdev_malloc.o 00:02:09.937 CC module/bdev/aio/bdev_aio_rpc.o 00:02:09.937 CC module/bdev/error/vbdev_error_rpc.o 00:02:09.937 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:09.937 CC module/bdev/gpt/vbdev_gpt.o 00:02:09.937 CC module/bdev/error/vbdev_error.o 00:02:09.937 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:09.938 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:09.938 CC module/bdev/split/vbdev_split.o 00:02:09.938 CC module/bdev/split/vbdev_split_rpc.o 00:02:09.938 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:09.938 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:09.938 CC module/blobfs/bdev/blobfs_bdev.o 00:02:09.938 CC module/bdev/raid/bdev_raid.o 00:02:09.938 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:09.938 CC module/bdev/null/bdev_null.o 00:02:09.938 CC module/bdev/nvme/bdev_nvme.o 00:02:09.938 CC module/bdev/raid/bdev_raid_rpc.o 00:02:09.938 CC module/bdev/null/bdev_null_rpc.o 00:02:09.938 CC module/bdev/iscsi/bdev_iscsi.o 00:02:09.938 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:09.938 CC module/bdev/nvme/nvme_rpc.o 00:02:09.938 CC module/bdev/raid/bdev_raid_sb.o 00:02:09.938 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:09.938 CC module/bdev/nvme/bdev_mdns_client.o 00:02:09.938 CC module/bdev/raid/raid0.o 00:02:09.938 CC module/bdev/raid/raid1.o 00:02:09.938 
CC module/bdev/nvme/vbdev_opal.o 00:02:09.938 CC module/bdev/raid/concat.o 00:02:09.938 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:09.938 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:09.938 LIB libspdk_sock_posix.a 00:02:09.938 SO libspdk_sock_posix.so.5.0 00:02:10.196 SYMLINK libspdk_sock_posix.so 00:02:10.196 LIB libspdk_blobfs_bdev.a 00:02:10.196 LIB libspdk_bdev_null.a 00:02:10.196 SO libspdk_blobfs_bdev.so.5.0 00:02:10.456 LIB libspdk_bdev_ftl.a 00:02:10.456 SO libspdk_bdev_null.so.5.0 00:02:10.456 LIB libspdk_bdev_error.a 00:02:10.456 LIB libspdk_bdev_malloc.a 00:02:10.456 SYMLINK libspdk_blobfs_bdev.so 00:02:10.456 LIB libspdk_bdev_gpt.a 00:02:10.456 SO libspdk_bdev_ftl.so.5.0 00:02:10.456 SO libspdk_bdev_error.so.5.0 00:02:10.456 SO libspdk_bdev_malloc.so.5.0 00:02:10.456 LIB libspdk_bdev_passthru.a 00:02:10.456 LIB libspdk_bdev_delay.a 00:02:10.456 SO libspdk_bdev_gpt.so.5.0 00:02:10.456 SYMLINK libspdk_bdev_null.so 00:02:10.456 LIB libspdk_bdev_iscsi.a 00:02:10.456 SYMLINK libspdk_bdev_ftl.so 00:02:10.456 SO libspdk_bdev_passthru.so.5.0 00:02:10.456 SO libspdk_bdev_delay.so.5.0 00:02:10.456 SYMLINK libspdk_bdev_error.so 00:02:10.456 SYMLINK libspdk_bdev_malloc.so 00:02:10.456 SO libspdk_bdev_iscsi.so.5.0 00:02:10.456 SYMLINK libspdk_bdev_gpt.so 00:02:10.456 LIB libspdk_bdev_split.a 00:02:10.456 SYMLINK libspdk_bdev_passthru.so 00:02:10.456 SYMLINK libspdk_bdev_delay.so 00:02:10.456 SO libspdk_bdev_split.so.5.0 00:02:10.456 LIB libspdk_bdev_lvol.a 00:02:10.456 SYMLINK libspdk_bdev_iscsi.so 00:02:10.456 LIB libspdk_bdev_virtio.a 00:02:10.715 SO libspdk_bdev_lvol.so.5.0 00:02:10.715 SYMLINK libspdk_bdev_split.so 00:02:10.715 SO libspdk_bdev_virtio.so.5.0 00:02:10.715 SYMLINK libspdk_bdev_lvol.so 00:02:10.715 LIB libspdk_bdev_aio.a 00:02:10.715 SYMLINK libspdk_bdev_virtio.so 00:02:10.715 SO libspdk_bdev_aio.so.5.0 00:02:10.715 LIB libspdk_bdev_zone_block.a 00:02:10.715 SO libspdk_bdev_zone_block.so.5.0 00:02:10.715 SYMLINK libspdk_bdev_aio.so 00:02:10.715 
SYMLINK libspdk_bdev_zone_block.so 00:02:10.974 LIB libspdk_bdev_raid.a 00:02:10.974 SO libspdk_bdev_raid.so.5.0 00:02:11.233 SYMLINK libspdk_bdev_raid.so 00:02:12.170 LIB libspdk_bdev_nvme.a 00:02:12.170 SO libspdk_bdev_nvme.so.6.0 00:02:12.429 SYMLINK libspdk_bdev_nvme.so 00:02:12.996 CC module/event/subsystems/iobuf/iobuf.o 00:02:12.996 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:12.996 CC module/event/subsystems/vmd/vmd.o 00:02:12.996 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:12.996 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:12.996 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:12.996 CC module/event/subsystems/scheduler/scheduler.o 00:02:12.996 CC module/event/subsystems/sock/sock.o 00:02:12.996 LIB libspdk_event_sock.a 00:02:12.996 LIB libspdk_event_scheduler.a 00:02:12.996 LIB libspdk_event_vmd.a 00:02:12.996 LIB libspdk_event_vhost_blk.a 00:02:12.996 LIB libspdk_event_vfu_tgt.a 00:02:12.996 LIB libspdk_event_iobuf.a 00:02:12.996 SO libspdk_event_sock.so.4.0 00:02:12.996 SO libspdk_event_scheduler.so.3.0 00:02:12.996 SO libspdk_event_vmd.so.5.0 00:02:12.996 SO libspdk_event_vhost_blk.so.2.0 00:02:12.996 SO libspdk_event_vfu_tgt.so.2.0 00:02:12.996 SO libspdk_event_iobuf.so.2.0 00:02:13.255 SYMLINK libspdk_event_sock.so 00:02:13.255 SYMLINK libspdk_event_scheduler.so 00:02:13.255 SYMLINK libspdk_event_vhost_blk.so 00:02:13.255 SYMLINK libspdk_event_vmd.so 00:02:13.255 SYMLINK libspdk_event_iobuf.so 00:02:13.255 SYMLINK libspdk_event_vfu_tgt.so 00:02:13.255 CC module/event/subsystems/accel/accel.o 00:02:13.514 LIB libspdk_event_accel.a 00:02:13.514 SO libspdk_event_accel.so.5.0 00:02:13.514 SYMLINK libspdk_event_accel.so 00:02:13.773 CC module/event/subsystems/bdev/bdev.o 00:02:14.340 LIB libspdk_event_bdev.a 00:02:14.340 SO libspdk_event_bdev.so.5.0 00:02:14.340 SYMLINK libspdk_event_bdev.so 00:02:14.598 CC module/event/subsystems/scsi/scsi.o 00:02:14.598 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:14.598 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:02:14.598 CC module/event/subsystems/nbd/nbd.o 00:02:14.598 CC module/event/subsystems/ublk/ublk.o 00:02:14.598 LIB libspdk_event_scsi.a 00:02:14.598 LIB libspdk_event_nbd.a 00:02:14.598 LIB libspdk_event_ublk.a 00:02:14.598 SO libspdk_event_scsi.so.5.0 00:02:14.598 SO libspdk_event_nbd.so.5.0 00:02:14.598 SO libspdk_event_ublk.so.2.0 00:02:14.598 SYMLINK libspdk_event_scsi.so 00:02:14.857 LIB libspdk_event_nvmf.a 00:02:14.857 SYMLINK libspdk_event_nbd.so 00:02:14.857 SO libspdk_event_nvmf.so.5.0 00:02:14.857 SYMLINK libspdk_event_ublk.so 00:02:14.857 SYMLINK libspdk_event_nvmf.so 00:02:14.857 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:14.857 CC module/event/subsystems/iscsi/iscsi.o 00:02:15.114 LIB libspdk_event_iscsi.a 00:02:15.114 SO libspdk_event_iscsi.so.5.0 00:02:15.114 LIB libspdk_event_vhost_scsi.a 00:02:15.114 SYMLINK libspdk_event_iscsi.so 00:02:15.373 SO libspdk_event_vhost_scsi.so.2.0 00:02:15.373 SYMLINK libspdk_event_vhost_scsi.so 00:02:15.373 SO libspdk.so.5.0 00:02:15.373 SYMLINK libspdk.so 00:02:15.632 CXX app/trace/trace.o 00:02:15.632 CC app/trace_record/trace_record.o 00:02:15.632 CC app/spdk_nvme_perf/perf.o 00:02:15.632 CC app/spdk_nvme_discover/discovery_aer.o 00:02:15.632 CC app/spdk_nvme_identify/identify.o 00:02:15.632 CC app/spdk_lspci/spdk_lspci.o 00:02:15.632 TEST_HEADER include/spdk/accel.h 00:02:15.632 TEST_HEADER include/spdk/accel_module.h 00:02:15.632 TEST_HEADER include/spdk/barrier.h 00:02:15.632 TEST_HEADER include/spdk/assert.h 00:02:15.632 CC app/spdk_top/spdk_top.o 00:02:15.632 CC test/rpc_client/rpc_client_test.o 00:02:15.632 TEST_HEADER include/spdk/bdev_module.h 00:02:15.632 TEST_HEADER include/spdk/bdev.h 00:02:15.632 TEST_HEADER include/spdk/base64.h 00:02:15.632 TEST_HEADER include/spdk/bdev_zone.h 00:02:15.899 TEST_HEADER include/spdk/bit_array.h 00:02:15.899 TEST_HEADER include/spdk/bit_pool.h 00:02:15.899 TEST_HEADER include/spdk/blob_bdev.h 00:02:15.899 
TEST_HEADER include/spdk/blobfs_bdev.h 00:02:15.899 TEST_HEADER include/spdk/blobfs.h 00:02:15.899 TEST_HEADER include/spdk/blob.h 00:02:15.899 TEST_HEADER include/spdk/conf.h 00:02:15.899 TEST_HEADER include/spdk/config.h 00:02:15.899 TEST_HEADER include/spdk/cpuset.h 00:02:15.899 TEST_HEADER include/spdk/crc16.h 00:02:15.900 TEST_HEADER include/spdk/crc32.h 00:02:15.900 TEST_HEADER include/spdk/crc64.h 00:02:15.900 TEST_HEADER include/spdk/dif.h 00:02:15.900 TEST_HEADER include/spdk/endian.h 00:02:15.900 TEST_HEADER include/spdk/env_dpdk.h 00:02:15.900 TEST_HEADER include/spdk/dma.h 00:02:15.900 TEST_HEADER include/spdk/env.h 00:02:15.900 TEST_HEADER include/spdk/fd.h 00:02:15.900 TEST_HEADER include/spdk/event.h 00:02:15.900 TEST_HEADER include/spdk/file.h 00:02:15.900 TEST_HEADER include/spdk/fd_group.h 00:02:15.900 TEST_HEADER include/spdk/ftl.h 00:02:15.900 TEST_HEADER include/spdk/gpt_spec.h 00:02:15.900 TEST_HEADER include/spdk/idxd.h 00:02:15.900 TEST_HEADER include/spdk/hexlify.h 00:02:15.900 TEST_HEADER include/spdk/histogram_data.h 00:02:15.900 TEST_HEADER include/spdk/idxd_spec.h 00:02:15.900 CC app/iscsi_tgt/iscsi_tgt.o 00:02:15.900 TEST_HEADER include/spdk/init.h 00:02:15.900 TEST_HEADER include/spdk/ioat.h 00:02:15.900 TEST_HEADER include/spdk/ioat_spec.h 00:02:15.900 TEST_HEADER include/spdk/iscsi_spec.h 00:02:15.900 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:15.900 TEST_HEADER include/spdk/json.h 00:02:15.900 TEST_HEADER include/spdk/jsonrpc.h 00:02:15.900 TEST_HEADER include/spdk/likely.h 00:02:15.900 CC app/spdk_dd/spdk_dd.o 00:02:15.900 TEST_HEADER include/spdk/lvol.h 00:02:15.900 TEST_HEADER include/spdk/memory.h 00:02:15.900 TEST_HEADER include/spdk/log.h 00:02:15.900 TEST_HEADER include/spdk/mmio.h 00:02:15.900 TEST_HEADER include/spdk/nbd.h 00:02:15.900 TEST_HEADER include/spdk/notify.h 00:02:15.900 TEST_HEADER include/spdk/nvme.h 00:02:15.900 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:15.900 TEST_HEADER include/spdk/nvme_ocssd_spec.h 
00:02:15.900 TEST_HEADER include/spdk/nvme_spec.h 00:02:15.900 TEST_HEADER include/spdk/nvme_intel.h 00:02:15.900 TEST_HEADER include/spdk/nvme_zns.h 00:02:15.900 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:15.900 TEST_HEADER include/spdk/nvmf.h 00:02:15.900 TEST_HEADER include/spdk/nvmf_spec.h 00:02:15.900 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:15.900 TEST_HEADER include/spdk/nvmf_transport.h 00:02:15.900 TEST_HEADER include/spdk/opal.h 00:02:15.900 TEST_HEADER include/spdk/opal_spec.h 00:02:15.900 CC app/spdk_tgt/spdk_tgt.o 00:02:15.900 TEST_HEADER include/spdk/pci_ids.h 00:02:15.900 TEST_HEADER include/spdk/pipe.h 00:02:15.900 CC app/nvmf_tgt/nvmf_main.o 00:02:15.900 CC app/vhost/vhost.o 00:02:15.900 TEST_HEADER include/spdk/queue.h 00:02:15.900 TEST_HEADER include/spdk/reduce.h 00:02:15.900 TEST_HEADER include/spdk/scheduler.h 00:02:15.900 TEST_HEADER include/spdk/rpc.h 00:02:15.900 TEST_HEADER include/spdk/scsi_spec.h 00:02:15.900 TEST_HEADER include/spdk/scsi.h 00:02:15.900 TEST_HEADER include/spdk/sock.h 00:02:15.900 TEST_HEADER include/spdk/stdinc.h 00:02:15.900 TEST_HEADER include/spdk/string.h 00:02:15.900 TEST_HEADER include/spdk/thread.h 00:02:15.900 TEST_HEADER include/spdk/trace.h 00:02:15.900 TEST_HEADER include/spdk/trace_parser.h 00:02:15.900 TEST_HEADER include/spdk/tree.h 00:02:15.900 TEST_HEADER include/spdk/ublk.h 00:02:15.900 TEST_HEADER include/spdk/util.h 00:02:15.900 TEST_HEADER include/spdk/uuid.h 00:02:15.900 TEST_HEADER include/spdk/version.h 00:02:15.900 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:15.900 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:15.900 TEST_HEADER include/spdk/vhost.h 00:02:15.900 TEST_HEADER include/spdk/vmd.h 00:02:15.900 TEST_HEADER include/spdk/xor.h 00:02:15.900 TEST_HEADER include/spdk/zipf.h 00:02:15.900 CXX test/cpp_headers/assert.o 00:02:15.900 CXX test/cpp_headers/accel_module.o 00:02:15.900 CXX test/cpp_headers/barrier.o 00:02:15.900 CXX test/cpp_headers/accel.o 00:02:15.900 CXX 
test/cpp_headers/base64.o 00:02:15.900 CXX test/cpp_headers/bdev.o 00:02:15.900 CXX test/cpp_headers/bdev_module.o 00:02:15.900 CXX test/cpp_headers/bit_array.o 00:02:15.900 CXX test/cpp_headers/bdev_zone.o 00:02:15.900 CXX test/cpp_headers/bit_pool.o 00:02:15.900 CXX test/cpp_headers/blobfs_bdev.o 00:02:15.900 CXX test/cpp_headers/blobfs.o 00:02:15.900 CXX test/cpp_headers/blob_bdev.o 00:02:15.900 CXX test/cpp_headers/blob.o 00:02:15.900 CXX test/cpp_headers/conf.o 00:02:15.900 CXX test/cpp_headers/config.o 00:02:15.900 CXX test/cpp_headers/cpuset.o 00:02:15.900 CXX test/cpp_headers/crc16.o 00:02:15.900 CXX test/cpp_headers/crc32.o 00:02:15.900 CXX test/cpp_headers/crc64.o 00:02:15.900 CXX test/cpp_headers/dma.o 00:02:15.900 CXX test/cpp_headers/dif.o 00:02:15.900 CXX test/cpp_headers/endian.o 00:02:15.900 CXX test/cpp_headers/env.o 00:02:15.900 CXX test/cpp_headers/event.o 00:02:15.900 CXX test/cpp_headers/env_dpdk.o 00:02:15.900 CXX test/cpp_headers/fd_group.o 00:02:15.900 CXX test/cpp_headers/fd.o 00:02:15.900 CXX test/cpp_headers/file.o 00:02:15.900 CXX test/cpp_headers/gpt_spec.o 00:02:15.900 CXX test/cpp_headers/ftl.o 00:02:15.900 CXX test/cpp_headers/hexlify.o 00:02:15.900 CXX test/cpp_headers/histogram_data.o 00:02:15.900 CXX test/cpp_headers/idxd.o 00:02:15.900 CXX test/cpp_headers/idxd_spec.o 00:02:15.900 CXX test/cpp_headers/init.o 00:02:15.900 CC test/nvme/overhead/overhead.o 00:02:15.900 CXX test/cpp_headers/ioat.o 00:02:15.900 CC examples/ioat/verify/verify.o 00:02:15.900 CC examples/sock/hello_world/hello_sock.o 00:02:15.900 CC test/app/jsoncat/jsoncat.o 00:02:15.900 CC test/nvme/sgl/sgl.o 00:02:15.900 CC test/event/event_perf/event_perf.o 00:02:15.900 CC examples/ioat/perf/perf.o 00:02:15.900 CC examples/util/zipf/zipf.o 00:02:15.900 CC test/accel/dif/dif.o 00:02:15.900 CC examples/vmd/lsvmd/lsvmd.o 00:02:15.900 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:15.900 CC test/app/histogram_perf/histogram_perf.o 00:02:15.900 CC 
test/env/vtophys/vtophys.o 00:02:15.900 CC test/event/reactor_perf/reactor_perf.o 00:02:15.900 CC examples/nvme/hello_world/hello_world.o 00:02:15.900 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.900 CC examples/nvme/reconnect/reconnect.o 00:02:15.900 CC examples/nvme/abort/abort.o 00:02:15.900 CC test/env/pci/pci_ut.o 00:02:15.900 CC examples/vmd/led/led.o 00:02:15.900 CC examples/nvme/arbitration/arbitration.o 00:02:15.900 CC test/thread/poller_perf/poller_perf.o 00:02:15.900 CC test/env/memory/memory_ut.o 00:02:15.900 CC test/app/stub/stub.o 00:02:15.900 CC examples/blob/hello_world/hello_blob.o 00:02:15.900 CC test/nvme/startup/startup.o 00:02:15.900 CC examples/nvme/hotplug/hotplug.o 00:02:15.900 CC test/event/reactor/reactor.o 00:02:15.900 CC test/event/app_repeat/app_repeat.o 00:02:15.900 CC examples/idxd/perf/perf.o 00:02:15.900 CC test/nvme/reserve/reserve.o 00:02:15.900 CC app/fio/nvme/fio_plugin.o 00:02:15.900 CC test/nvme/aer/aer.o 00:02:15.900 CC test/nvme/simple_copy/simple_copy.o 00:02:16.179 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:16.179 CC test/blobfs/mkfs/mkfs.o 00:02:16.179 CC test/nvme/e2edp/nvme_dp.o 00:02:16.179 CC test/nvme/fused_ordering/fused_ordering.o 00:02:16.179 CC test/nvme/reset/reset.o 00:02:16.179 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.179 CC test/nvme/connect_stress/connect_stress.o 00:02:16.179 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:16.179 CC test/nvme/fdp/fdp.o 00:02:16.179 CC test/nvme/cuse/cuse.o 00:02:16.179 CC test/nvme/err_injection/err_injection.o 00:02:16.179 CC examples/accel/perf/accel_perf.o 00:02:16.179 CXX test/cpp_headers/ioat_spec.o 00:02:16.179 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:16.179 CC test/nvme/boot_partition/boot_partition.o 00:02:16.179 CC test/app/bdev_svc/bdev_svc.o 00:02:16.179 CC examples/nvmf/nvmf/nvmf.o 00:02:16.179 CC test/bdev/bdevio/bdevio.o 00:02:16.179 CC test/nvme/compliance/nvme_compliance.o 00:02:16.179 CC examples/blob/cli/blobcli.o 00:02:16.179 
CC test/dma/test_dma/test_dma.o 00:02:16.179 CC test/event/scheduler/scheduler.o 00:02:16.179 CC examples/bdev/hello_world/hello_bdev.o 00:02:16.179 CC app/fio/bdev/fio_plugin.o 00:02:16.179 CC examples/thread/thread/thread_ex.o 00:02:16.179 LINK spdk_lspci 00:02:16.179 CC test/lvol/esnap/esnap.o 00:02:16.450 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:16.450 CC test/env/mem_callbacks/mem_callbacks.o 00:02:16.450 LINK rpc_client_test 00:02:16.450 LINK interrupt_tgt 00:02:16.450 LINK nvmf_tgt 00:02:16.450 LINK spdk_trace_record 00:02:16.450 LINK vhost 00:02:16.450 LINK lsvmd 00:02:16.450 LINK event_perf 00:02:16.450 LINK reactor_perf 00:02:16.719 LINK vtophys 00:02:16.719 LINK zipf 00:02:16.719 LINK env_dpdk_post_init 00:02:16.719 LINK startup 00:02:16.719 LINK spdk_nvme_discover 00:02:16.719 CXX test/cpp_headers/iscsi_spec.o 00:02:16.719 CXX test/cpp_headers/json.o 00:02:16.719 CXX test/cpp_headers/jsonrpc.o 00:02:16.719 LINK iscsi_tgt 00:02:16.719 CXX test/cpp_headers/likely.o 00:02:16.719 CXX test/cpp_headers/log.o 00:02:16.719 CXX test/cpp_headers/lvol.o 00:02:16.719 LINK pmr_persistence 00:02:16.719 CXX test/cpp_headers/memory.o 00:02:16.719 CXX test/cpp_headers/mmio.o 00:02:16.719 CXX test/cpp_headers/nbd.o 00:02:16.719 CXX test/cpp_headers/notify.o 00:02:16.719 CXX test/cpp_headers/nvme.o 00:02:16.719 CXX test/cpp_headers/nvme_intel.o 00:02:16.719 CXX test/cpp_headers/nvme_ocssd.o 00:02:16.719 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:16.719 LINK jsoncat 00:02:16.719 LINK led 00:02:16.719 LINK ioat_perf 00:02:16.719 CXX test/cpp_headers/nvme_spec.o 00:02:16.719 CXX test/cpp_headers/nvme_zns.o 00:02:16.719 LINK connect_stress 00:02:16.719 LINK hello_world 00:02:16.719 CXX test/cpp_headers/nvmf_cmd.o 00:02:16.719 LINK histogram_perf 00:02:16.719 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:16.719 CXX test/cpp_headers/nvmf.o 00:02:16.720 LINK poller_perf 00:02:16.720 LINK spdk_tgt 00:02:16.720 CXX test/cpp_headers/nvmf_spec.o 00:02:16.720 LINK hotplug 
00:02:16.720 CXX test/cpp_headers/nvmf_transport.o 00:02:16.720 LINK app_repeat 00:02:16.720 LINK reactor 00:02:16.720 LINK cmb_copy 00:02:16.720 LINK overhead 00:02:16.720 CXX test/cpp_headers/opal.o 00:02:16.720 LINK stub 00:02:16.720 CXX test/cpp_headers/opal_spec.o 00:02:16.720 LINK reset 00:02:16.720 LINK nvme_dp 00:02:16.720 LINK hello_blob 00:02:16.720 LINK bdev_svc 00:02:16.720 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:16.720 LINK boot_partition 00:02:16.720 CXX test/cpp_headers/pci_ids.o 00:02:16.720 LINK hello_bdev 00:02:16.720 CXX test/cpp_headers/pipe.o 00:02:16.720 LINK sgl 00:02:16.980 LINK aer 00:02:16.980 CXX test/cpp_headers/queue.o 00:02:16.980 CXX test/cpp_headers/reduce.o 00:02:16.980 LINK err_injection 00:02:16.980 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:16.980 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:16.980 LINK mkfs 00:02:16.980 LINK fused_ordering 00:02:16.980 LINK doorbell_aers 00:02:16.980 CXX test/cpp_headers/scheduler.o 00:02:16.980 CXX test/cpp_headers/rpc.o 00:02:16.980 CXX test/cpp_headers/scsi.o 00:02:16.980 CXX test/cpp_headers/sock.o 00:02:16.980 CXX test/cpp_headers/scsi_spec.o 00:02:16.980 LINK verify 00:02:16.980 CXX test/cpp_headers/stdinc.o 00:02:16.980 CXX test/cpp_headers/string.o 00:02:16.980 CXX test/cpp_headers/thread.o 00:02:16.980 CXX test/cpp_headers/trace_parser.o 00:02:16.980 CXX test/cpp_headers/trace.o 00:02:16.980 CXX test/cpp_headers/tree.o 00:02:16.980 CXX test/cpp_headers/ublk.o 00:02:16.980 CXX test/cpp_headers/util.o 00:02:16.980 LINK hello_sock 00:02:16.980 CXX test/cpp_headers/version.o 00:02:16.980 CXX test/cpp_headers/uuid.o 00:02:16.980 CXX test/cpp_headers/vfio_user_pci.o 00:02:16.980 CXX test/cpp_headers/vfio_user_spec.o 00:02:16.980 CXX test/cpp_headers/vhost.o 00:02:16.980 CXX test/cpp_headers/vmd.o 00:02:16.980 LINK dif 00:02:16.980 CXX test/cpp_headers/xor.o 00:02:16.980 LINK scheduler 00:02:16.980 CXX test/cpp_headers/zipf.o 00:02:16.980 LINK reserve 00:02:16.980 LINK 
simple_copy 00:02:16.980 LINK spdk_dd 00:02:16.980 LINK nvme_compliance 00:02:16.980 LINK thread 00:02:16.980 LINK accel_perf 00:02:17.239 LINK nvmf 00:02:17.239 LINK idxd_perf 00:02:17.239 LINK arbitration 00:02:17.239 LINK abort 00:02:17.239 LINK reconnect 00:02:17.239 LINK spdk_trace 00:02:17.239 LINK test_dma 00:02:17.239 LINK bdevio 00:02:17.239 LINK spdk_nvme_identify 00:02:17.497 LINK fdp 00:02:17.497 LINK spdk_nvme 00:02:17.497 LINK mem_callbacks 00:02:17.497 LINK nvme_fuzz 00:02:17.497 LINK spdk_nvme_perf 00:02:17.497 LINK nvme_manage 00:02:17.497 LINK spdk_bdev 00:02:17.497 LINK blobcli 00:02:17.497 LINK pci_ut 00:02:17.756 LINK memory_ut 00:02:17.756 LINK bdevperf 00:02:17.756 LINK vhost_fuzz 00:02:18.014 LINK spdk_top 00:02:18.014 LINK cuse 00:02:18.951 LINK iscsi_fuzz 00:02:21.485 LINK esnap 00:02:21.744 00:02:21.744 real 0m41.055s 00:02:21.744 user 6m38.128s 00:02:21.744 sys 3m3.306s 00:02:21.744 17:13:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.744 17:13:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.744 ************************************ 00:02:21.744 END TEST make 00:02:21.744 ************************************ 00:02:21.744 17:13:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:21.744 17:13:00 -- nvmf/common.sh@7 -- # uname -s 00:02:21.744 17:13:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:21.744 17:13:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:21.744 17:13:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:21.744 17:13:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:21.744 17:13:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:21.744 17:13:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:21.744 17:13:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:21.744 17:13:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:21.744 17:13:00 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:21.744 17:13:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:22.003 17:13:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:22.003 17:13:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:22.003 17:13:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:22.003 17:13:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:22.003 17:13:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:22.003 17:13:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:22.003 17:13:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:22.003 17:13:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.003 17:13:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.003 17:13:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.003 17:13:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.003 17:13:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.003 17:13:00 -- paths/export.sh@5 -- # export PATH 00:02:22.003 17:13:00 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.003 17:13:00 -- nvmf/common.sh@46 -- # : 0 00:02:22.003 17:13:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:22.003 17:13:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:22.003 17:13:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:22.003 17:13:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:22.003 17:13:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:22.003 17:13:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:22.003 17:13:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:22.003 17:13:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:22.003 17:13:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:22.003 17:13:00 -- spdk/autotest.sh@32 -- # uname -s 00:02:22.003 17:13:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:22.004 17:13:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:22.004 17:13:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.004 17:13:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:22.004 17:13:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.004 17:13:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:22.004 17:13:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:22.004 17:13:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:22.004 17:13:00 -- spdk/autotest.sh@48 -- # udevadm_pid=3861304 00:02:22.004 17:13:00 -- spdk/autotest.sh@51 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:22.004 17:13:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:22.004 17:13:00 -- spdk/autotest.sh@54 -- # echo 3861306 00:02:22.004 17:13:00 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:22.004 17:13:00 -- spdk/autotest.sh@56 -- # echo 3861307 00:02:22.004 17:13:00 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:22.004 17:13:00 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:22.004 17:13:00 -- spdk/autotest.sh@60 -- # echo 3861308 00:02:22.004 17:13:00 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:22.004 17:13:00 -- spdk/autotest.sh@62 -- # echo 3861309 00:02:22.004 17:13:00 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:22.004 17:13:00 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:22.004 17:13:00 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:22.004 17:13:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:22.004 17:13:00 -- common/autotest_common.sh@10 -- # set +x 00:02:22.004 17:13:00 -- spdk/autotest.sh@70 -- # create_test_list 00:02:22.004 17:13:00 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:22.004 17:13:00 -- common/autotest_common.sh@10 -- # set +x 00:02:22.004 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:22.004 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:22.004 17:13:00 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:22.004 17:13:00 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.004 17:13:00 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.004 17:13:00 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:22.004 17:13:00 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:22.004 17:13:00 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:22.004 17:13:00 -- common/autotest_common.sh@1440 -- # uname 00:02:22.004 17:13:00 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:22.004 17:13:00 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:22.004 17:13:00 -- common/autotest_common.sh@1460 -- # uname 00:02:22.004 17:13:00 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:22.004 17:13:00 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:22.004 17:13:00 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:22.004 17:13:00 -- spdk/autotest.sh@83 -- # hash lcov 00:02:22.004 17:13:00 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:22.004 17:13:00 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:22.004 --rc lcov_branch_coverage=1 00:02:22.004 --rc lcov_function_coverage=1 00:02:22.004 --rc genhtml_branch_coverage=1 00:02:22.004 --rc genhtml_function_coverage=1 00:02:22.004 --rc genhtml_legend=1 00:02:22.004 --rc geninfo_all_blocks=1 00:02:22.004 ' 00:02:22.004 17:13:00 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:22.004 --rc lcov_branch_coverage=1 00:02:22.004 --rc lcov_function_coverage=1 00:02:22.004 --rc genhtml_branch_coverage=1 00:02:22.004 --rc genhtml_function_coverage=1 00:02:22.004 --rc genhtml_legend=1 00:02:22.004 
--rc geninfo_all_blocks=1 00:02:22.004 ' 00:02:22.004 17:13:00 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:22.004 --rc lcov_branch_coverage=1 00:02:22.004 --rc lcov_function_coverage=1 00:02:22.004 --rc genhtml_branch_coverage=1 00:02:22.004 --rc genhtml_function_coverage=1 00:02:22.004 --rc genhtml_legend=1 00:02:22.004 --rc geninfo_all_blocks=1 00:02:22.004 --no-external' 00:02:22.004 17:13:00 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:22.004 --rc lcov_branch_coverage=1 00:02:22.004 --rc lcov_function_coverage=1 00:02:22.004 --rc genhtml_branch_coverage=1 00:02:22.004 --rc genhtml_function_coverage=1 00:02:22.004 --rc genhtml_legend=1 00:02:22.004 --rc geninfo_all_blocks=1 00:02:22.004 --no-external' 00:02:22.004 17:13:00 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:22.004 lcov: LCOV version 1.14 00:02:22.004 17:13:00 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:23.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:23.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:23.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:23.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:23.909 
00:02:23.909 geninfo: WARNING: GCOV did not produce any data ("no functions found") for the following .gcno files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: bdev_module, base64, accel_module, barrier, accel, bdev_zone, bdev, blob, blob_bdev, blobfs, conf, cpuset, blobfs_bdev, crc16, crc64, bit_pool, dma, endian, dif, config, crc32, event, env, file, gpt_spec, hexlify, env_dpdk, ftl, fd_group, histogram_data, idxd, fd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, likely, log, jsonrpc, lvol, mmio, memory, nbd, notify, nvme, nvme_intel 00:02:24.169 geninfo: WARNING: GCOV did not produce any data ("no functions found") for: nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, scheduler, reduce, rpc, scsi, sock, scsi_spec, thread, trace_parser, stdinc, string, ublk, trace, util, tree, version, uuid, vfio_user_pci, vhost, vfio_user_spec, vmd 00:02:24.428 geninfo: WARNING: GCOV did not produce any data ("no functions found") for: zipf, xor 00:02:39.381 geninfo: WARNING: GCOV did not produce any data ("no functions found") for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:39.381
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:39.381 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:39.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:39.381 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:57.462 17:13:33 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:57.462 17:13:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:57.462 17:13:33 -- common/autotest_common.sh@10 -- # set +x 00:02:57.463 17:13:33 -- spdk/autotest.sh@102 -- # rm -f 00:02:57.463 17:13:33 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.030 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:02:58.030 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:58.030 0000:80:04.2 (8086 2021): Already using the 
ioatdma driver 00:02:58.288 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:58.288 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:58.288 17:13:37 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:58.288 17:13:37 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:58.288 17:13:37 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:58.288 17:13:37 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:58.288 17:13:37 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:58.288 17:13:37 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:58.288 17:13:37 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:58.288 17:13:37 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:58.288 17:13:37 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:58.288 17:13:37 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:58.288 17:13:37 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:58.288 17:13:37 -- spdk/autotest.sh@121 -- # grep -v p 00:02:58.288 17:13:37 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.288 17:13:37 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:58.288 17:13:37 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:58.288 17:13:37 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:58.288 17:13:37 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:58.288 No valid GPT data, bailing 00:02:58.288 17:13:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:58.288 17:13:37 -- scripts/common.sh@393 -- # pt= 00:02:58.288 17:13:37 -- scripts/common.sh@394 -- # return 1 00:02:58.288 17:13:37 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:58.288 1+0 records in 00:02:58.288 1+0 records out 00:02:58.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.00461396 s, 227 MB/s 00:02:58.288 17:13:37 -- spdk/autotest.sh@129 -- # sync 00:02:58.288 17:13:37 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:58.288 17:13:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:58.288 17:13:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:04.847 17:13:43 -- spdk/autotest.sh@135 -- # uname -s 00:03:04.847 17:13:43 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:04.847 17:13:43 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.847 17:13:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.847 17:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.847 17:13:43 -- common/autotest_common.sh@10 -- # set +x 00:03:04.847 ************************************ 00:03:04.847 START TEST setup.sh 00:03:04.847 ************************************ 00:03:04.847 17:13:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.847 * Looking for test storage... 
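The `dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1` step above wipes the first MiB of the device after the GPT check bails. A hedged sketch of that wipe, using a scratch file in place of a real `/dev/nvme0n1` (do not run `dd` against a disk you care about):

```shell
# Sketch of the wipe step above, against a 2 MiB scratch file instead of a
# block device. conv=notrunc keeps the file size so the second MiB survives.
img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1M count=2 status=none   # fake "device"
dd if=/dev/zero   of="$img" bs=1M count=1 conv=notrunc status=none

# Verify: the first MiB should now compare equal to a MiB of zeroes.
if cmp -s <(head -c 1048576 "$img") <(head -c 1048576 /dev/zero); then
  result="first MiB zeroed"
else
  result="wipe failed"
fi
echo "$result"
rm -f "$img"
```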
00:03:04.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.847 17:13:43 -- setup/test-setup.sh@10 -- # uname -s 00:03:04.847 17:13:43 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:04.847 17:13:43 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:04.847 17:13:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.847 17:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.847 17:13:43 -- common/autotest_common.sh@10 -- # set +x 00:03:04.847 ************************************ 00:03:04.847 START TEST acl 00:03:04.847 ************************************ 00:03:04.847 17:13:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:04.847 * Looking for test storage... 00:03:04.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.847 17:13:43 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:04.847 17:13:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:04.847 17:13:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:04.847 17:13:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:04.847 17:13:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:04.847 17:13:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:04.847 17:13:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:04.847 17:13:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.847 17:13:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:04.847 17:13:43 -- setup/acl.sh@12 -- # devs=() 00:03:04.847 17:13:43 -- setup/acl.sh@12 -- # declare -a devs 00:03:04.847 17:13:43 -- setup/acl.sh@13 -- # drivers=() 00:03:04.847 17:13:43 -- setup/acl.sh@13 -- # declare -A drivers 00:03:04.847 17:13:43 -- setup/acl.sh@51 -- # 
setup reset 00:03:04.847 17:13:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.847 17:13:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.133 17:13:46 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:08.133 17:13:46 -- setup/acl.sh@16 -- # local dev driver 00:03:08.133 17:13:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.133 17:13:46 -- setup/acl.sh@15 -- # setup output status 00:03:08.133 17:13:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.133 17:13:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:10.668 Hugepages 00:03:10.668 node hugesize free / total 00:03:10.668 17:13:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 00:03:10.669 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:10.669 17:13:49 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- 
setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # continue 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:10.669 17:13:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:10.669 17:13:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:10.669 17:13:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:10.669 17:13:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.669 17:13:49 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:10.669 17:13:49 -- setup/acl.sh@54 -- # run_test denied denied 00:03:10.669 17:13:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:10.669 17:13:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.669 17:13:49 -- common/autotest_common.sh@10 -- # set +x 00:03:10.669 ************************************ 00:03:10.669 START TEST denied 00:03:10.669 ************************************ 00:03:10.669 17:13:49 -- common/autotest_common.sh@1104 -- # denied 00:03:10.669 17:13:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:03:10.669 17:13:49 -- setup/acl.sh@38 -- # setup output config 00:03:10.669 17:13:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:03:10.669 17:13:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.669 17:13:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.955 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:03:13.955 17:13:52 -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:03:13.956 17:13:52 -- setup/acl.sh@28 -- # local dev driver 00:03:13.956 17:13:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:13.956 17:13:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:03:13.956 17:13:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:03:13.956 17:13:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:13.956 17:13:52 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:13.956 17:13:52 -- setup/acl.sh@41 -- # setup reset 00:03:13.956 17:13:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.956 17:13:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.144 00:03:18.144 real 0m7.330s 00:03:18.144 user 0m2.458s 00:03:18.144 sys 0m4.160s 00:03:18.144 17:13:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.145 17:13:56 -- common/autotest_common.sh@10 -- # set +x 00:03:18.145 ************************************ 00:03:18.145 END TEST denied 00:03:18.145 ************************************ 00:03:18.145 17:13:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:18.145 17:13:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:18.145 17:13:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:18.145 17:13:56 -- common/autotest_common.sh@10 -- # set +x 00:03:18.145 ************************************ 00:03:18.145 START TEST allowed 00:03:18.145 ************************************ 00:03:18.145 17:13:56 -- common/autotest_common.sh@1104 -- # allowed 00:03:18.145 17:13:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:03:18.145 17:13:56 -- setup/acl.sh@45 -- # setup output config 00:03:18.145 17:13:56 -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:03:18.145 17:13:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.145 17:13:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.332 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.333 17:14:00 -- setup/acl.sh@47 -- # verify 00:03:22.333 17:14:00 -- setup/acl.sh@28 -- # local dev driver 00:03:22.333 17:14:00 -- setup/acl.sh@48 -- # setup reset 00:03:22.333 17:14:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.333 17:14:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.659 
00:03:25.659 real 0m7.296s 00:03:25.659 user 0m2.260s 00:03:25.659 sys 0m4.141s 00:03:25.659 17:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.659 17:14:04 -- common/autotest_common.sh@10 -- # set +x 00:03:25.659 ************************************ 00:03:25.659 END TEST allowed 00:03:25.659 ************************************ 00:03:25.659 00:03:25.659 real 0m20.936s 00:03:25.659 user 0m7.028s 00:03:25.659 sys 0m12.519s 00:03:25.659 17:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.659 17:14:04 -- common/autotest_common.sh@10 -- # set +x 00:03:25.659 ************************************ 00:03:25.659 END TEST acl 00:03:25.659 ************************************ 00:03:25.659 17:14:04 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:25.659 17:14:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.659 17:14:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.659 17:14:04 -- common/autotest_common.sh@10 -- # set +x 00:03:25.659 ************************************ 00:03:25.659 START TEST hugepages 00:03:25.659 ************************************ 00:03:25.659 17:14:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:25.659 * Looking for test storage... 
00:03:25.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.659 17:14:04 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:25.659 17:14:04 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:25.659 17:14:04 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:25.659 17:14:04 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:25.659 17:14:04 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:25.659 17:14:04 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:25.659 17:14:04 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:25.659 17:14:04 -- setup/common.sh@18 -- # local node= 00:03:25.659 17:14:04 -- setup/common.sh@19 -- # local var val 00:03:25.660 17:14:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.660 17:14:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.660 17:14:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.660 17:14:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.660 17:14:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.660 17:14:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 68247988 kB' 'MemAvailable: 71713460 kB' 'Buffers: 2704 kB' 'Cached: 15858884 kB' 'SwapCached: 0 kB' 'Active: 13001308 kB' 'Inactive: 3528960 kB' 'Active(anon): 12548996 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 671988 kB' 'Mapped: 208676 kB' 'Shmem: 11880316 kB' 'KReclaimable: 275188 kB' 'Slab: 908108 kB' 'SReclaimable: 275188 kB' 'SUnreclaim: 632920 kB' 'KernelStack: 22784 kB' 'PageTables: 
9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 13981140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219572 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 
17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.660 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.660 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.660 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # continue 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.661 17:14:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.661 17:14:04 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.661 17:14:04 -- setup/common.sh@33 -- # echo 2048 00:03:25.661 17:14:04 -- setup/common.sh@33 -- # return 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:25.661 17:14:04 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:25.661 17:14:04 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:25.661 17:14:04 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:25.661 17:14:04 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:25.661 17:14:04 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:25.661 17:14:04 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:25.661 17:14:04 -- setup/hugepages.sh@207 -- # get_nodes 00:03:25.661 17:14:04 -- setup/hugepages.sh@27 
-- # local node 00:03:25.661 17:14:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.661 17:14:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:25.661 17:14:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.661 17:14:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:25.661 17:14:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.661 17:14:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.661 17:14:04 -- setup/hugepages.sh@208 -- # clear_hp 00:03:25.661 17:14:04 -- setup/hugepages.sh@37 -- # local node hp 00:03:25.661 17:14:04 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.661 17:14:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.661 17:14:04 -- setup/hugepages.sh@41 -- # echo 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.661 17:14:04 -- setup/hugepages.sh@41 -- # echo 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.661 17:14:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.661 17:14:04 -- setup/hugepages.sh@41 -- # echo 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.661 17:14:04 -- setup/hugepages.sh@41 -- # echo 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:25.661 17:14:04 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:25.661 17:14:04 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:25.661 17:14:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.661 17:14:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.661 17:14:04 -- common/autotest_common.sh@10 -- # set +x 00:03:25.661 
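The clear_hp loop traced just above echoes 0 into every per-node hugepage counter so the following test starts from a clean pool. A minimal sketch of that step, with the sysfs root made a parameter (an assumption for this sketch only, so it can be exercised against a scratch directory; the traced script writes under /sys/devices/system/node directly):

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp step seen in the trace: zero every per-node
# hugepage pool (hugepages-2048kB, hugepages-1048576kB, ...) so the
# next hugepages test starts from a known-clean state.
# sysfs_root is a stand-in parameter introduced for this sketch.
clear_hp() {
    local sysfs_root=$1 node hp
    for node in "$sysfs_root"/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}
```

Pointing it at a temp directory that mimics the sysfs layout shows the effect without touching the host's real pools.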
************************************ 00:03:25.661 START TEST default_setup 00:03:25.661 ************************************ 00:03:25.661 17:14:04 -- common/autotest_common.sh@1104 -- # default_setup 00:03:25.661 17:14:04 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.661 17:14:04 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:25.661 17:14:04 -- setup/hugepages.sh@51 -- # shift 00:03:25.661 17:14:04 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:25.661 17:14:04 -- setup/hugepages.sh@52 -- # local node_ids 00:03:25.661 17:14:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.661 17:14:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.661 17:14:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:25.661 17:14:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.661 17:14:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.661 17:14:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.661 17:14:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.661 17:14:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.661 17:14:04 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:25.661 17:14:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:25.661 17:14:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:25.661 17:14:04 -- setup/hugepages.sh@73 -- # return 0 00:03:25.661 17:14:04 -- setup/hugepages.sh@137 -- # setup output 00:03:25.661 17:14:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.661 17:14:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.947 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 
00:03:28.947 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.947 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:29.516 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:29.516 17:14:08 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:29.516 17:14:08 -- setup/hugepages.sh@89 -- # local node 00:03:29.516 17:14:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.516 17:14:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.516 17:14:08 -- setup/hugepages.sh@92 -- # local surp 00:03:29.516 17:14:08 -- setup/hugepages.sh@93 -- # local resv 00:03:29.516 17:14:08 -- setup/hugepages.sh@94 -- # local anon 00:03:29.516 17:14:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.516 17:14:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.516 17:14:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.516 17:14:08 -- setup/common.sh@18 -- # local node= 00:03:29.516 17:14:08 -- setup/common.sh@19 -- # local var val 00:03:29.516 17:14:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.516 17:14:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.516 17:14:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.516 17:14:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.516 17:14:08 -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:29.516 17:14:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70384772 kB' 'MemAvailable: 73850176 kB' 'Buffers: 2704 kB' 'Cached: 15859000 kB' 'SwapCached: 0 kB' 'Active: 13019356 kB' 'Inactive: 3528960 kB' 'Active(anon): 12567044 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689736 kB' 'Mapped: 208532 kB' 'Shmem: 11880432 kB' 'KReclaimable: 275052 kB' 'Slab: 907104 kB' 'SReclaimable: 275052 kB' 'SUnreclaim: 632052 kB' 'KernelStack: 22880 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 14001108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219556 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 
17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.516 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.516 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.517 17:14:08 -- setup/common.sh@33 -- # echo 0 00:03:29.517 17:14:08 -- setup/common.sh@33 -- # return 0 00:03:29.517 17:14:08 -- setup/hugepages.sh@97 -- # anon=0 00:03:29.517 17:14:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.517 17:14:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.517 17:14:08 -- setup/common.sh@18 -- # local node= 00:03:29.517 17:14:08 -- setup/common.sh@19 -- # local var val 00:03:29.517 17:14:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.517 17:14:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.517 17:14:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.517 17:14:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.517 17:14:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.517 17:14:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70384000 kB' 'MemAvailable: 73849404 kB' 'Buffers: 2704 kB' 'Cached: 15859004 kB' 'SwapCached: 0 kB' 'Active: 
13018744 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566432 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689592 kB' 'Mapped: 208532 kB' 'Shmem: 11880436 kB' 'KReclaimable: 275052 kB' 'Slab: 907068 kB' 'SReclaimable: 275052 kB' 'SUnreclaim: 632016 kB' 'KernelStack: 22864 kB' 'PageTables: 10012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 14002404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219588 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 
-- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.517 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.517 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.518 17:14:08 -- 
setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.518 17:14:08 -- setup/common.sh@33 -- # echo 0 00:03:29.518 17:14:08 -- setup/common.sh@33 -- # return 0 00:03:29.518 17:14:08 -- setup/hugepages.sh@99 -- # surp=0 00:03:29.518 17:14:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.518 17:14:08 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:29.518 17:14:08 -- setup/common.sh@18 -- # local node= 00:03:29.518 17:14:08 -- setup/common.sh@19 -- # local var val 00:03:29.518 17:14:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.518 17:14:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.518 17:14:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.518 17:14:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.518 17:14:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.518 17:14:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70382132 kB' 'MemAvailable: 73847536 kB' 'Buffers: 2704 kB' 'Cached: 15859016 kB' 'SwapCached: 0 kB' 'Active: 13018824 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566512 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689508 kB' 'Mapped: 208532 kB' 'Shmem: 11880448 kB' 'KReclaimable: 275052 kB' 'Slab: 907140 kB' 'SReclaimable: 275052 kB' 'SUnreclaim: 632088 kB' 'KernelStack: 22816 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 14000908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219604 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 
30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- 
setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- 
setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@32 -- 
# continue 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.518 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.518 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.518 17:14:08 -- setup/common.sh@33 -- # echo 0 00:03:29.518 17:14:08 -- setup/common.sh@33 -- # return 0 00:03:29.518 17:14:08 -- setup/hugepages.sh@100 -- # resv=0 00:03:29.518 17:14:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.518 nr_hugepages=1024 00:03:29.518 17:14:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.518 resv_hugepages=0 00:03:29.518 17:14:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.518 surplus_hugepages=0 00:03:29.518 17:14:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.518 anon_hugepages=0 00:03:29.518 17:14:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.518 17:14:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.518 17:14:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.518 17:14:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.518 17:14:08 -- setup/common.sh@18 -- # local node= 00:03:29.518 17:14:08 -- setup/common.sh@19 -- # local var val 00:03:29.518 17:14:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.518 17:14:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.518 17:14:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.519 17:14:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.519 17:14:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.519 17:14:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70382132 kB' 'MemAvailable: 73847536 kB' 
'Buffers: 2704 kB' 'Cached: 15859028 kB' 'SwapCached: 0 kB' 'Active: 13019016 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566704 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689656 kB' 'Mapped: 208532 kB' 'Shmem: 11880460 kB' 'KReclaimable: 275052 kB' 'Slab: 907140 kB' 'SReclaimable: 275052 kB' 'SUnreclaim: 632088 kB' 'KernelStack: 22992 kB' 'PageTables: 10288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 14002432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219636 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 
-- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.519 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.519 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.520 17:14:08 -- setup/common.sh@33 -- # echo 1024 00:03:29.520 17:14:08 -- setup/common.sh@33 -- # return 0 00:03:29.520 17:14:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.520 17:14:08 -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.520 17:14:08 -- setup/hugepages.sh@27 -- # local node 00:03:29.520 17:14:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.520 17:14:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.520 17:14:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.520 17:14:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.520 17:14:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.520 17:14:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.520 17:14:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.520 17:14:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.520 17:14:08 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:03:29.520 17:14:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.520 17:14:08 -- setup/common.sh@18 -- # local node=0 00:03:29.520 17:14:08 -- setup/common.sh@19 -- # local var val 00:03:29.520 17:14:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.520 17:14:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.520 17:14:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.520 17:14:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.520 17:14:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.520 17:14:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 39735384 kB' 'MemUsed: 8333012 kB' 'SwapCached: 0 kB' 'Active: 4985508 kB' 'Inactive: 228300 kB' 'Active(anon): 4858988 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5064880 kB' 'Mapped: 41236 kB' 'AnonPages: 152092 kB' 'Shmem: 4710060 kB' 'KernelStack: 11976 kB' 'PageTables: 3184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124336 kB' 'Slab: 402348 kB' 'SReclaimable: 124336 kB' 'SUnreclaim: 278012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.520 17:14:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.520 17:14:08 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.520 17:14:08 -- setup/common.sh@32 -- # continue [xtrace condensed: setup/common.sh@31-32 repeats continue / IFS=': ' / read -r var val _ for each non-matching meminfo field (MemUsed, SwapCached, Active, Inactive, ..., HugePages_Total, HugePages_Free) until HugePages_Surp matches] 00:03:29.521 17:14:08 --
setup/common.sh@33 -- # echo 0 00:03:29.521 17:14:08 -- setup/common.sh@33 -- # return 0 00:03:29.779 17:14:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.779 17:14:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.779 17:14:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.779 17:14:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.779 17:14:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.779 node0=1024 expecting 1024 00:03:29.779 17:14:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.779 00:03:29.779 real 0m4.172s 00:03:29.779 user 0m1.348s 00:03:29.779 sys 0m2.044s 00:03:29.779 17:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.779 17:14:08 -- common/autotest_common.sh@10 -- # set +x 00:03:29.779 ************************************ 00:03:29.779 END TEST default_setup 00:03:29.779 ************************************ 00:03:29.779 17:14:08 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:29.779 17:14:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.779 17:14:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.779 17:14:08 -- common/autotest_common.sh@10 -- # set +x 00:03:29.779 ************************************ 00:03:29.779 START TEST per_node_1G_alloc 00:03:29.779 ************************************ 00:03:29.779 17:14:08 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:29.779 17:14:08 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:29.779 17:14:08 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:29.779 17:14:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:29.779 17:14:08 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:29.779 17:14:08 -- setup/hugepages.sh@51 -- # shift 00:03:29.779 17:14:08 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:29.779 17:14:08 -- setup/hugepages.sh@52 -- # 
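The per_node_1G_alloc prologue traced above (get_test_nr_hugepages 1048576 0 1) resolves a 1 GiB request to 512 hugepages and assigns that count to each node listed in HUGENODE. A minimal standalone sketch of that assignment, mirroring the variable names in setup/hugepages.sh with the sizes hard-coded from this log rather than read from the system:

```shell
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages_per_node as traced above: a 1 GiB
# request with Hugepagesize=2048 kB becomes 512 pages, and every node
# listed in HUGENODE receives that count (so node0=512 and node1=512,
# 1024 pages total).
size=1048576                 # requested size in kB (1 GiB), per the test
default_hugepages=2048       # Hugepagesize in kB, assumed from the log
user_nodes=(0 1)             # HUGENODE=0,1

nr_hugepages=$(( size / default_hugepages ))   # 512
declare -A nodes_test
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

for node in "${user_nodes[@]}"; do
    echo "node${node}=${nodes_test[$node]}"
done
```

Running this prints node0=512 and node1=512, matching the nodes_test[_no_nodes]=512 assignments in the trace.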
local node_ids 00:03:29.779 17:14:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.779 17:14:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:29.779 17:14:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:29.779 17:14:08 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:29.779 17:14:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.779 17:14:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:29.779 17:14:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.779 17:14:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.779 17:14:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.779 17:14:08 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:29.779 17:14:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.779 17:14:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:29.779 17:14:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.779 17:14:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:29.779 17:14:08 -- setup/hugepages.sh@73 -- # return 0 00:03:29.779 17:14:08 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:29.779 17:14:08 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:29.779 17:14:08 -- setup/hugepages.sh@146 -- # setup output 00:03:29.779 17:14:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.779 17:14:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.065 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:33.065 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:00:04.2 (8086 2021): 
Already using the vfio-pci driver 00:03:33.065 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.065 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.065 17:14:11 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:33.065 17:14:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:33.065 17:14:11 -- setup/hugepages.sh@89 -- # local node 00:03:33.065 17:14:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.065 17:14:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.065 17:14:11 -- setup/hugepages.sh@92 -- # local surp 00:03:33.065 17:14:11 -- setup/hugepages.sh@93 -- # local resv 00:03:33.065 17:14:11 -- setup/hugepages.sh@94 -- # local anon 00:03:33.065 17:14:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.065 17:14:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.065 17:14:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.065 17:14:11 -- setup/common.sh@18 -- # local node= 00:03:33.065 17:14:11 -- setup/common.sh@19 -- # local var val 00:03:33.065 17:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.065 17:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.065 17:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.065 17:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.065 17:14:11 -- 
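The verify_nr_hugepages stage that follows drives setup/common.sh's get_meminfo helper, whose xtrace dominates this log: it walks the meminfo file with IFS=': ' and issues continue for every field until the requested one matches. A condensed, self-contained sketch of that loop; the second (file) argument is an addition here so the sketch can be pointed at a sample file, whereas the real helper selects /proc/meminfo or the per-node sysfs file itself:

```shell
#!/usr/bin/env bash
# Sketch of setup/common.sh:get_meminfo as seen in the xtrace: read the
# meminfo file line by line, splitting on ':' and ' ', and print the
# value of the requested field (0 if the field is absent).
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    echo 0
}

# Demonstrate against a sample snapshot (values taken from this log).
sample=$(mktemp)
printf '%s\n' 'MemTotal: 92286604 kB' 'HugePages_Total: 1024' \
    'HugePages_Surp: 0' > "$sample"
get_meminfo HugePages_Surp "$sample"   # prints 0
rm -f "$sample"
```

The repeated `continue / IFS=': ' / read -r var val _` triplets in the trace below are exactly this loop skipping non-matching fields one by one.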
setup/common.sh@28 -- # mapfile -t mem 00:03:33.065 17:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.065 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.065 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.065 17:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70387600 kB' 'MemAvailable: 73852996 kB' 'Buffers: 2704 kB' 'Cached: 15859120 kB' 'SwapCached: 0 kB' 'Active: 13021404 kB' 'Inactive: 3528960 kB' 'Active(anon): 12569092 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 691960 kB' 'Mapped: 208488 kB' 'Shmem: 11880552 kB' 'KReclaimable: 275036 kB' 'Slab: 906932 kB' 'SReclaimable: 275036 kB' 'SUnreclaim: 631896 kB' 'KernelStack: 22976 kB' 'PageTables: 10348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 14003040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219860 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:33.065 17:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.065 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.065 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.065 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.065 17:14:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.065 17:14:11 -- setup/common.sh@32 -- 
# continue [xtrace condensed: setup/common.sh@31-32 repeats continue / IFS=': ' / read -r var val _ for each non-matching meminfo field (MemAvailable through HardwareCorrupted) until AnonHugePages matches] 00:03:33.066 17:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.066 17:14:11 -- setup/common.sh@33 -- # echo 0 00:03:33.066 17:14:11 -- setup/common.sh@33 -- # return 0 00:03:33.066 17:14:11 -- setup/hugepages.sh@97 -- # anon=0 00:03:33.066 17:14:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.066 17:14:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.066 17:14:11 -- setup/common.sh@18 -- # local node= 00:03:33.066 17:14:11 -- setup/common.sh@19 -- # local var val 00:03:33.066 17:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.066 17:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.066 17:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.066 17:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.066 17:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.066 17:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.066 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.066 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.066 17:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70398664 kB' 'MemAvailable: 73864060 kB'
'Buffers: 2704 kB' 'Cached: 15859124 kB' 'SwapCached: 0 kB' 'Active: 13020920 kB' 'Inactive: 3528960 kB' 'Active(anon): 12568608 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 691464 kB' 'Mapped: 208548 kB' 'Shmem: 11880556 kB' 'KReclaimable: 275036 kB' 'Slab: 906908 kB' 'SReclaimable: 275036 kB' 'SUnreclaim: 631872 kB' 'KernelStack: 22976 kB' 'PageTables: 10256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 14002804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219748 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:33.066 17:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.066 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.066 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.066 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.066 17:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.066 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.066 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.066 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.066 17:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.066 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.066 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.066 17:14:11 -- 
setup/common.sh@31 -- # read -r var val _ [xtrace condensed: setup/common.sh@31-32 repeats continue / IFS=': ' / read -r var val _ for each non-matching field of the second meminfo snapshot (Buffers onward) while scanning for HugePages_Surp]
]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 
-- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- 
setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.067 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.067 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.068 17:14:11 -- setup/common.sh@33 -- # echo 0 00:03:33.068 17:14:11 -- setup/common.sh@33 -- # return 0 00:03:33.068 17:14:11 -- setup/hugepages.sh@99 -- # surp=0 00:03:33.068 17:14:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.068 17:14:11 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:33.068 17:14:11 -- setup/common.sh@18 -- # local node= 00:03:33.068 17:14:11 -- setup/common.sh@19 -- # local var val 00:03:33.068 17:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.068 17:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.068 17:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.068 17:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.068 17:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.068 17:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70398696 kB' 'MemAvailable: 73864092 kB' 'Buffers: 2704 kB' 'Cached: 15859136 kB' 'SwapCached: 0 kB' 'Active: 13021132 kB' 'Inactive: 3528960 kB' 'Active(anon): 12568820 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 691660 kB' 'Mapped: 208488 kB' 'Shmem: 11880568 kB' 'KReclaimable: 275036 kB' 'Slab: 906964 kB' 'SReclaimable: 275036 kB' 'SUnreclaim: 631928 kB' 'KernelStack: 23056 kB' 'PageTables: 10440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 14003064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219764 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 
'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 
17:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.068 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.068 17:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- 
setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- 
setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- 
# continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.069 17:14:11 -- setup/common.sh@33 -- # echo 0 00:03:33.069 17:14:11 -- setup/common.sh@33 -- # return 0 00:03:33.069 17:14:11 -- setup/hugepages.sh@100 -- # resv=0 00:03:33.069 17:14:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.069 nr_hugepages=1024 00:03:33.069 17:14:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.069 resv_hugepages=0 00:03:33.069 17:14:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.069 surplus_hugepages=0 00:03:33.069 17:14:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.069 anon_hugepages=0 00:03:33.069 17:14:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.069 17:14:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.069 17:14:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.069 17:14:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.069 17:14:11 -- setup/common.sh@18 -- # local node= 00:03:33.069 17:14:11 -- setup/common.sh@19 -- # local var val 00:03:33.069 17:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.069 17:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.069 17:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.069 17:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.069 17:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.069 17:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70404076 kB' 'MemAvailable: 73869472 kB' 
'Buffers: 2704 kB' 'Cached: 15859152 kB' 'SwapCached: 0 kB' 'Active: 13017920 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565608 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688452 kB' 'Mapped: 207336 kB' 'Shmem: 11880584 kB' 'KReclaimable: 275036 kB' 'Slab: 906940 kB' 'SReclaimable: 275036 kB' 'SUnreclaim: 631904 kB' 'KernelStack: 22944 kB' 'PageTables: 9980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13988296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219700 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 
-- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.069 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.069 17:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 
00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.070 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.070 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.071 17:14:11 -- setup/common.sh@33 -- # echo 1024 00:03:33.071 17:14:11 -- setup/common.sh@33 -- # return 0 00:03:33.071 17:14:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.071 17:14:11 -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.071 17:14:11 -- setup/hugepages.sh@27 -- # local node 00:03:33.071 17:14:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.071 17:14:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:33.071 17:14:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.071 17:14:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:33.071 17:14:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:33.071 17:14:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.071 17:14:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.071 17:14:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.071 17:14:11 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:03:33.071 17:14:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.071 17:14:11 -- setup/common.sh@18 -- # local node=0 00:03:33.071 17:14:11 -- setup/common.sh@19 -- # local var val 00:03:33.071 17:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.071 17:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.071 17:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.071 17:14:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.071 17:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.071 17:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40805692 kB' 'MemUsed: 7262704 kB' 'SwapCached: 0 kB' 'Active: 4986120 kB' 'Inactive: 228300 kB' 'Active(anon): 4859600 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5064964 kB' 'Mapped: 40836 kB' 'AnonPages: 152688 kB' 'Shmem: 4710144 kB' 'KernelStack: 12072 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124336 kB' 'Slab: 402192 kB' 'SReclaimable: 124336 kB' 'SUnreclaim: 277856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.071 17:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:33.071 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.071 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- 
setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- 
setup/common.sh@33 -- # echo 0 00:03:33.072 17:14:11 -- setup/common.sh@33 -- # return 0 00:03:33.072 17:14:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.072 17:14:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.072 17:14:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.072 17:14:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:33.072 17:14:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.072 17:14:11 -- setup/common.sh@18 -- # local node=1 00:03:33.072 17:14:11 -- setup/common.sh@19 -- # local var val 00:03:33.072 17:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.072 17:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.072 17:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:33.072 17:14:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:33.072 17:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.072 17:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29598672 kB' 'MemUsed: 14619536 kB' 'SwapCached: 0 kB' 'Active: 8031356 kB' 'Inactive: 3300660 kB' 'Active(anon): 7705564 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10796904 kB' 'Mapped: 166500 kB' 'AnonPages: 535268 kB' 'Shmem: 7170452 kB' 'KernelStack: 10776 kB' 'PageTables: 5932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150692 kB' 'Slab: 504708 kB' 'SReclaimable: 150692 kB' 'SUnreclaim: 354016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 
-- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.072 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.072 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # continue 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.073 17:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.073 17:14:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.073 17:14:11 -- setup/common.sh@33 -- # echo 0 00:03:33.073 17:14:11 -- setup/common.sh@33 -- # return 0 00:03:33.073 17:14:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.073 17:14:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.073 17:14:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.073 17:14:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:33.073 node0=512 expecting 512 00:03:33.073 17:14:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.073 17:14:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.073 17:14:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.073 17:14:11 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:33.073 node1=512 expecting 512 00:03:33.073 17:14:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:33.073 00:03:33.073 real 0m3.165s 00:03:33.073 user 0m1.324s 00:03:33.073 sys 0m1.889s 00:03:33.073 17:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.073 17:14:11 -- common/autotest_common.sh@10 -- # set +x 00:03:33.073 ************************************ 00:03:33.073 END TEST per_node_1G_alloc 00:03:33.073 ************************************ 00:03:33.073 17:14:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:33.073 17:14:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.073 17:14:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.073 17:14:11 -- 
common/autotest_common.sh@10 -- # set +x 00:03:33.073 ************************************ 00:03:33.073 START TEST even_2G_alloc 00:03:33.073 ************************************ 00:03:33.073 17:14:11 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:33.073 17:14:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:33.073 17:14:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:33.073 17:14:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:33.073 17:14:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:33.073 17:14:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:33.073 17:14:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.073 17:14:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:33.073 17:14:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:33.073 17:14:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.073 17:14:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.073 17:14:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:33.073 17:14:11 -- setup/hugepages.sh@83 -- # : 512 00:03:33.073 17:14:11 -- setup/hugepages.sh@84 -- # : 1 00:03:33.073 17:14:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:33.073 17:14:11 -- setup/hugepages.sh@83 -- # : 0 00:03:33.073 17:14:11 -- setup/hugepages.sh@84 -- # : 0 00:03:33.073 17:14:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:33.073 17:14:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:33.073 17:14:11 -- setup/hugepages.sh@153 -- # 
HUGE_EVEN_ALLOC=yes 00:03:33.073 17:14:11 -- setup/hugepages.sh@153 -- # setup output 00:03:33.073 17:14:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.073 17:14:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.618 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:35.618 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.618 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.878 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.879 17:14:14 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:35.879 17:14:14 -- setup/hugepages.sh@89 -- # local node 00:03:35.879 17:14:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.879 17:14:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.879 17:14:14 -- setup/hugepages.sh@92 -- # local surp 00:03:35.879 17:14:14 -- setup/hugepages.sh@93 -- # local resv 00:03:35.879 17:14:14 -- setup/hugepages.sh@94 -- # local anon 00:03:35.879 17:14:14 -- 
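The `even_2G_alloc` setup above runs with `NRHUGE=1024` and `HUGE_EVEN_ALLOC=yes`, which on this two-node box should leave 512 hugepages per node. A small sketch of that even division (function name and arguments are illustrative, not the script's own helpers):

```shell
# Sketch of the even allocation implied by NRHUGE=1024 and
# HUGE_EVEN_ALLOC=yes: the total 2 MB hugepage count is split
# evenly across NUMA nodes (1024 over 2 nodes -> 512 each).
even_split() {
    local total=$1 nodes=$2 per_node i
    per_node=$((total / nodes))
    for ((i = 0; i < nodes; i++)); do
        echo "node${i}=${per_node}"
    done
}
```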
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.879 17:14:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.879 17:14:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.879 17:14:14 -- setup/common.sh@18 -- # local node= 00:03:35.879 17:14:14 -- setup/common.sh@19 -- # local var val 00:03:35.879 17:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.879 17:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.879 17:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.879 17:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.879 17:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.879 17:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70395544 kB' 'MemAvailable: 73860936 kB' 'Buffers: 2704 kB' 'Cached: 15859236 kB' 'SwapCached: 0 kB' 'Active: 13016988 kB' 'Inactive: 3528960 kB' 'Active(anon): 12564676 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687464 kB' 'Mapped: 207476 kB' 'Shmem: 11880668 kB' 'KReclaimable: 275028 kB' 'Slab: 906492 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631464 kB' 'KernelStack: 22704 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13984400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219604 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 
00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': 
' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.879 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.879 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.880 17:14:14 -- setup/common.sh@33 -- # echo 0 00:03:35.880 17:14:14 -- setup/common.sh@33 -- # return 0 00:03:35.880 17:14:14 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.880 17:14:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.880 17:14:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.880 17:14:14 -- setup/common.sh@18 -- # local node= 00:03:35.880 17:14:14 -- setup/common.sh@19 -- # local var val 
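The long `IFS=': ' read -r var val _` / `continue` runs in the trace are `get_meminfo` scanning `/proc/meminfo` one field at a time until it hits the requested key, echoing `0` when the key never matches. A condensed sketch of that parsing loop, assuming the standard `Key: value unit` meminfo format (the file argument is added here for illustration):

```shell
# Minimal sketch of the get_meminfo-style loop in the trace: split each
# meminfo line on ': ', return the value once the requested key matches,
# and fall back to echoing 0 if the key is absent.
get_meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    echo 0
}
```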
00:03:35.880 17:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.880 17:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.880 17:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.880 17:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.880 17:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.880 17:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70397260 kB' 'MemAvailable: 73862652 kB' 'Buffers: 2704 kB' 'Cached: 15859236 kB' 'SwapCached: 0 kB' 'Active: 13017084 kB' 'Inactive: 3528960 kB' 'Active(anon): 12564772 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687476 kB' 'Mapped: 207384 kB' 'Shmem: 11880668 kB' 'KReclaimable: 275028 kB' 'Slab: 906472 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631444 kB' 'KernelStack: 22672 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13984412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219588 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- 
# continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 
17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.880 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.880 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.881 17:14:14 -- setup/common.sh@32 -- # continue 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.881 17:14:14 -- setup/common.sh@31 -- # 
read -r var val _
[... xtrace of the field-by-field scan over the /proc/meminfo snapshot elided: each remaining key is compared to HugePages_Surp and skipped with `continue` until the match below ...]
00:03:35.881 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.881 17:14:14 -- setup/common.sh@33 -- # echo 0
00:03:35.881 17:14:14 -- setup/common.sh@33 -- # return 0
00:03:35.881 17:14:14 -- setup/hugepages.sh@99 -- # surp=0
00:03:35.881 17:14:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:35.881 17:14:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:35.881 17:14:14 -- setup/common.sh@18 -- # local node=
00:03:35.881 17:14:14 -- setup/common.sh@19 -- # local var val
00:03:35.881 17:14:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.881 17:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.881 17:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.881 17:14:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.881 17:14:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.881 17:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.881 17:14:14 -- setup/common.sh@31 -- # IFS=': '
00:03:35.881 17:14:14 -- setup/common.sh@31 -- # read -r var val _
00:03:35.881 17:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70397352 kB' 'MemAvailable: 73862744 kB' 'Buffers: 2704 kB' 'Cached: 15859236 kB' 'SwapCached: 0 kB' 'Active: 13016740 kB' 'Inactive: 3528960 kB' 'Active(anon): 12564428 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687132 kB' 'Mapped: 207384 kB' 'Shmem: 11880668 kB' 'KReclaimable: 275028 kB' 'Slab: 906472 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631444 kB' 'KernelStack: 22656 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13984424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219588 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB'
[... field-by-field scan elided: each key is compared to HugePages_Rsvd and skipped with `continue` until the match below ...]
00:03:35.882 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:35.882 17:14:14 -- setup/common.sh@33 -- # echo 0
00:03:35.882 17:14:14 -- setup/common.sh@33 -- # return 0
00:03:35.882 17:14:14 -- setup/hugepages.sh@100 -- # resv=0
00:03:35.882 17:14:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:35.882 nr_hugepages=1024
00:03:35.882 17:14:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:35.882 resv_hugepages=0
00:03:35.882 17:14:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:35.882 surplus_hugepages=0
00:03:35.882 17:14:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:35.882 anon_hugepages=0
00:03:35.882 17:14:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:35.882 17:14:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:35.882 17:14:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:35.883 17:14:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:35.883 17:14:14 -- setup/common.sh@18 -- # local node=
00:03:35.883 17:14:14 -- setup/common.sh@19 -- # local var val
00:03:35.883 17:14:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.883 17:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.883 17:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.883 17:14:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.883 17:14:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.883 17:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.883 17:14:14 -- setup/common.sh@31 -- # IFS=': '
00:03:35.883 17:14:14 -- setup/common.sh@31 -- # read -r var val _
00:03:35.883 17:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70397352 kB' 'MemAvailable: 73862744 kB' 'Buffers: 2704 kB' 'Cached: 15859264 kB' 'SwapCached: 0 kB' 'Active: 13016956 kB' 'Inactive: 3528960 kB' 'Active(anon): 12564644 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687284 kB' 'Mapped: 207384 kB' 'Shmem: 11880696 kB' 'KReclaimable: 275028 kB' 'Slab: 906472 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631444 kB' 'KernelStack: 22656 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13984440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219588 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB'
[... field-by-field scan elided: each key is compared to HugePages_Total and skipped with `continue` until the match below ...]
00:03:36.143 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:36.143 17:14:14 -- setup/common.sh@33 -- # echo 1024
00:03:36.143 17:14:14 -- setup/common.sh@33 -- # return 0
00:03:36.143 17:14:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.143 17:14:14 -- setup/hugepages.sh@112 -- # get_nodes
00:03:36.143 17:14:14 -- setup/hugepages.sh@27 -- # local node
00:03:36.143 17:14:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.143 17:14:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.143 17:14:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.144 17:14:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.144 17:14:14 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:36.144 17:14:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:36.144 17:14:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.144 17:14:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.144 17:14:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.144 17:14:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.144 17:14:14 -- setup/common.sh@18 -- # local node=0
00:03:36.144 17:14:14 -- setup/common.sh@19 -- # local var val
00:03:36.144 17:14:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:36.144 17:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.144 17:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.144 17:14:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.144 17:14:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.144 17:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': '
00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _
00:03:36.144 17:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40801568 kB' 'MemUsed: 7266828 kB' 'SwapCached: 0 kB' 'Active: 4984892 kB' 'Inactive: 228300 kB' 'Active(anon): 4858372 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5065056 kB' 'Mapped: 40824 kB' 'AnonPages: 151344 kB' 'Shmem: 4710236 kB' 'KernelStack: 11960 kB' 'PageTables: 3172 kB'
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124336 kB' 'Slab: 401984 kB' 'SReclaimable: 124336 kB' 'SUnreclaim: 277648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.144 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.144 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@33 -- # echo 0 00:03:36.145 17:14:14 -- setup/common.sh@33 -- # return 0 00:03:36.145 17:14:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.145 17:14:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.145 17:14:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.145 17:14:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:36.145 17:14:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.145 17:14:14 -- setup/common.sh@18 -- # local node=1 00:03:36.145 17:14:14 -- setup/common.sh@19 -- # local var val 00:03:36.145 17:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.145 17:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.145 17:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.145 17:14:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.145 17:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.145 17:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29595400 kB' 'MemUsed: 14622808 kB' 'SwapCached: 0 kB' 
'Active: 8032196 kB' 'Inactive: 3300660 kB' 'Active(anon): 7706404 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10796936 kB' 'Mapped: 166560 kB' 'AnonPages: 536044 kB' 'Shmem: 7170484 kB' 'KernelStack: 10728 kB' 'PageTables: 6040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150692 kB' 'Slab: 504488 kB' 'SReclaimable: 150692 kB' 'SUnreclaim: 353796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 
00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 
17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.145 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.145 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.146 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.146 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.146 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.146 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.146 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.146 17:14:14 -- setup/common.sh@32 -- # continue 00:03:36.146 17:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.146 17:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.146 17:14:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.146 17:14:14 -- setup/common.sh@33 -- # echo 0 00:03:36.146 17:14:14 -- setup/common.sh@33 -- # return 0 00:03:36.146 17:14:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.146 17:14:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.146 17:14:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.146 17:14:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.146 node0=512 expecting 512 00:03:36.146 17:14:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.146 17:14:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.146 17:14:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.146 17:14:14 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:36.146 node1=512 expecting 512 00:03:36.146 17:14:14 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:36.146 00:03:36.146 real 
0m3.185s 00:03:36.146 user 0m1.321s 00:03:36.146 sys 0m1.909s 00:03:36.146 17:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.146 17:14:14 -- common/autotest_common.sh@10 -- # set +x 00:03:36.146 ************************************ 00:03:36.146 END TEST even_2G_alloc 00:03:36.146 ************************************ 00:03:36.146 17:14:14 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:36.146 17:14:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:36.146 17:14:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:36.146 17:14:14 -- common/autotest_common.sh@10 -- # set +x 00:03:36.146 ************************************ 00:03:36.146 START TEST odd_alloc 00:03:36.146 ************************************ 00:03:36.146 17:14:14 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:36.146 17:14:14 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:36.146 17:14:14 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:36.146 17:14:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:36.146 17:14:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.146 17:14:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.146 17:14:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.146 17:14:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:36.146 17:14:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.146 17:14:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.146 17:14:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.146 17:14:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 
1]=512 00:03:36.146 17:14:14 -- setup/hugepages.sh@83 -- # : 513 00:03:36.146 17:14:14 -- setup/hugepages.sh@84 -- # : 1 00:03:36.146 17:14:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:36.146 17:14:14 -- setup/hugepages.sh@83 -- # : 0 00:03:36.146 17:14:14 -- setup/hugepages.sh@84 -- # : 0 00:03:36.146 17:14:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.146 17:14:14 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:36.146 17:14:14 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:36.146 17:14:14 -- setup/hugepages.sh@160 -- # setup output 00:03:36.146 17:14:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.146 17:14:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:39.501 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:39.501 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.501 0000:80:04.1 (8086 2021): Already using 
the vfio-pci driver 00:03:39.501 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.501 17:14:17 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:39.501 17:14:17 -- setup/hugepages.sh@89 -- # local node 00:03:39.501 17:14:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.501 17:14:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.501 17:14:17 -- setup/hugepages.sh@92 -- # local surp 00:03:39.501 17:14:17 -- setup/hugepages.sh@93 -- # local resv 00:03:39.501 17:14:17 -- setup/hugepages.sh@94 -- # local anon 00:03:39.501 17:14:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.501 17:14:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.501 17:14:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.501 17:14:17 -- setup/common.sh@18 -- # local node= 00:03:39.501 17:14:17 -- setup/common.sh@19 -- # local var val 00:03:39.501 17:14:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.501 17:14:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.501 17:14:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.501 17:14:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.501 17:14:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.501 17:14:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70397364 kB' 'MemAvailable: 73862756 kB' 'Buffers: 2704 kB' 'Cached: 15859368 kB' 'SwapCached: 0 kB' 'Active: 13017908 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565596 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
688044 kB' 'Mapped: 207400 kB' 'Shmem: 11880800 kB' 'KReclaimable: 275028 kB' 'Slab: 906204 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631176 kB' 'KernelStack: 22640 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13985172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219636 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.501 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.501 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 
00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.502 17:14:17 -- setup/common.sh@33 -- # echo 0 00:03:39.502 17:14:17 -- setup/common.sh@33 -- # return 0 00:03:39.502 17:14:17 -- setup/hugepages.sh@97 -- # anon=0 00:03:39.502 17:14:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.502 17:14:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.502 17:14:17 -- setup/common.sh@18 -- # local node= 00:03:39.502 17:14:17 -- setup/common.sh@19 -- # local var val 00:03:39.502 17:14:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.502 17:14:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.502 17:14:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.502 17:14:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.502 17:14:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.502 17:14:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70398592 kB' 'MemAvailable: 73863984 kB' 'Buffers: 2704 kB' 'Cached: 15859372 kB' 'SwapCached: 0 kB' 'Active: 13017812 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565500 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688012 kB' 'Mapped: 207392 kB' 'Shmem: 11880804 kB' 'KReclaimable: 275028 kB' 'Slab: 906248 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631220 kB' 'KernelStack: 22672 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13985184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
219588 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # 
continue 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.502 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.502 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 
17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 
-- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 
17:14:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.503 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.503 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.504 17:14:17 -- setup/common.sh@33 -- # echo 0 00:03:39.504 17:14:17 -- setup/common.sh@33 -- # return 0 00:03:39.504 17:14:17 -- setup/hugepages.sh@99 -- # surp=0 00:03:39.504 17:14:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.504 17:14:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.504 17:14:17 -- setup/common.sh@18 -- # local node= 00:03:39.504 17:14:17 -- setup/common.sh@19 -- # local var val 00:03:39.504 17:14:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.504 17:14:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.504 17:14:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.504 17:14:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.504 17:14:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.504 17:14:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.504 17:14:17 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70399096 kB' 'MemAvailable: 73864488 kB' 'Buffers: 2704 kB' 'Cached: 15859372 kB' 'SwapCached: 0 kB' 'Active: 13017812 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565500 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688012 kB' 'Mapped: 207392 kB' 'Shmem: 11880804 kB' 'KReclaimable: 275028 kB' 'Slab: 906248 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631220 kB' 'KernelStack: 22672 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13985200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219588 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # continue 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.504 17:14:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.504 17:14:17 -- setup/common.sh@32 -- # 
continue
00:03:39.504 17:14:17 -- setup/common.sh@31 -- # IFS=': '
00:03:39.504 17:14:17 -- setup/common.sh@31 -- # read -r var val _
00:03:39.504 17:14:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.504 17:14:17 -- setup/common.sh@32 -- # continue
[... same IFS=': ' / read -r var val _ / [[ key == HugePages_Rsvd ]] / continue trace repeated for every remaining /proc/meminfo key, Cached through HugePages_Free ...]
00:03:39.505 17:14:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.505 17:14:18 -- setup/common.sh@33 -- # echo 0
00:03:39.505 17:14:18 -- setup/common.sh@33 -- # return 0
00:03:39.505 17:14:18 -- setup/hugepages.sh@100 -- # resv=0
00:03:39.505 17:14:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:03:39.505 17:14:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:39.505 17:14:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
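The trace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` record by record; the `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` noise is simply how `bash -x` escapes the right-hand side of each `[[ key == "$get" ]]` test. A minimal sketch of that loop, reconstructed from the trace rather than taken from the SPDK source, fed a few lines of the snapshot captured in this log so the results are deterministic:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the trace: split each
# "Key: value kB" record on ": ", skip non-matching keys (the long
# runs of "continue" in the log), and echo the value once the
# requested key is found. Function and variable names follow the
# trace; this is a reconstruction, not the exact SPDK helper.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # bash -x prints this comparison with every pattern character
        # backslash-escaped, producing the \H\u\g\e... lines above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# A few records from the /proc/meminfo snapshot in this log
meminfo_snapshot='MemTotal: 92286604 kB
HugePages_Total: 1025
HugePages_Free: 1025
HugePages_Rsvd: 0
Hugepagesize: 2048 kB'

get_meminfo HugePages_Total <<<"$meminfo_snapshot"   # prints 1025
get_meminfo HugePages_Rsvd  <<<"$meminfo_snapshot"   # prints 0
```

This is why the log repeats the same three-statement trace once per meminfo key: the loop has no early index, it scans linearly until the requested key matches.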
00:03:39.505 17:14:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:39.505 17:14:18 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:39.505 17:14:18 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:39.505 17:14:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:39.505 17:14:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:39.505 17:14:18 -- setup/common.sh@18 -- # local node=
00:03:39.505 17:14:18 -- setup/common.sh@19 -- # local var val
00:03:39.505 17:14:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.505 17:14:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.505 17:14:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.505 17:14:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.505 17:14:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.505 17:14:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.505 17:14:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70398740 kB' 'MemAvailable: 73864132 kB' 'Buffers: 2704 kB' 'Cached: 15859408 kB' 'SwapCached: 0 kB' 'Active: 13017492 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565180 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687624 kB' 'Mapped: 207392 kB' 'Shmem: 11880840 kB' 'KReclaimable: 275028 kB' 'Slab: 906248 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 631220 kB' 'KernelStack: 22656 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13985212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219588 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB'
[... same IFS=': ' / read -r var val _ / [[ key == HugePages_Total ]] / continue trace repeated for every meminfo key, MemTotal through Unaccepted ...]
00:03:39.506 17:14:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:39.506 17:14:18 -- setup/common.sh@33 -- # echo 1025
00:03:39.506 17:14:18 -- setup/common.sh@33 -- # return 0
00:03:39.506 17:14:18 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:39.506 17:14:18 -- setup/hugepages.sh@112 -- # get_nodes
00:03:39.506 17:14:18 -- setup/hugepages.sh@27 -- # local node
00:03:39.506 17:14:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.506 17:14:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:39.506 17:14:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.506 17:14:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:39.506 17:14:18 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:39.506 17:14:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:39.506 17:14:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:39.506 17:14:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:39.506 17:14:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:39.506 17:14:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.506 17:14:18 -- setup/common.sh@18 -- # local node=0
00:03:39.507 17:14:18 -- setup/common.sh@19 -- # local var val
00:03:39.507 17:14:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.507 17:14:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.507 17:14:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:39.507 17:14:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:39.507 17:14:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.507 17:14:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.507 17:14:18 -- setup/common.sh@31 -- # IFS=': '
00:03:39.507 17:14:18 -- setup/common.sh@31 -- # read -r var val _
00:03:39.507 17:14:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40805032 kB' 'MemUsed: 7263364 kB' 'SwapCached: 0 kB' 'Active: 4985584 kB' 'Inactive: 228300 kB' 'Active(anon): 4859064 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5065140 kB' 'Mapped: 40824 kB' 'AnonPages: 151920 kB' 'Shmem: 4710320 kB' 'KernelStack: 11976 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124336 kB' 'Slab: 401864 kB' 'SReclaimable: 124336 kB' 'SUnreclaim: 277528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... same IFS=': ' / read -r var val _ / [[ key == HugePages_Surp ]] / continue trace repeated for the node0 meminfo keys, MemTotal onward ...]
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # continue 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # continue 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # continue 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # continue 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.507 17:14:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.507 17:14:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.507 17:14:18 -- setup/common.sh@33 -- # echo 0 00:03:39.507 17:14:18 -- setup/common.sh@33 -- # return 0 00:03:39.507 17:14:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.507 17:14:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.507 17:14:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.507 17:14:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.507 17:14:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.507 17:14:18 -- setup/common.sh@18 -- # local node=1 00:03:39.507 17:14:18 -- setup/common.sh@19 -- # local var val 00:03:39.507 17:14:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.508 17:14:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
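The loop traced here is setup/common.sh's get_meminfo scanning "field: value" pairs with IFS=': ' until the requested key matches. A standalone sketch of the same pattern (the function body below is reconstructed for illustration, not copied from SPDK, so treat its exact details as assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: pick the per-node meminfo file
# when a node number is given, strip the "Node N " prefix those files
# carry, then scan "field: value" pairs for the requested key.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # var gets the field name, val the number; "_" swallows "kB".
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo MemTotal   # prints the MemTotal value in kB on Linux
```

Unlike the traced code, which reads the whole file into an array with mapfile and strips the prefix with an extglob substitution, this sketch streams the file through sed; the matching logic is the same.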
00:03:39.508 17:14:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.508 17:14:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.508 17:14:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.508 17:14:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.508 17:14:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.508 17:14:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.508 17:14:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29593184 kB' 'MemUsed: 14625024 kB' 'SwapCached: 0 kB' 'Active: 8032176 kB' 'Inactive: 3300660 kB' 'Active(anon): 7706384 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10796976 kB' 'Mapped: 166568 kB' 'AnonPages: 535964 kB' 'Shmem: 7170524 kB' 'KernelStack: 10680 kB' 'PageTables: 5792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150692 kB' 'Slab: 504384 kB' 'SReclaimable: 150692 kB' 'SUnreclaim: 353692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:39.508 17:14:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.508 17:14:18 -- setup/common.sh@32 -- # continue 00:03:39.508 17:14:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.508 17:14:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.508 17:14:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.508 17:14:18 -- setup/common.sh@32 -- # continue 00:03:39.508 17:14:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.508 17:14:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.508 17:14:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.508 
17:14:18 -- setup/common.sh@32 -- # continue [the same read/compare/continue cycle repeats for each remaining node1 meminfo field, SwapCached through HugePages_Free] 00:03:39.508 17:14:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.508 17:14:18 -- setup/common.sh@33 -- # echo 0 00:03:39.508 17:14:18 -- setup/common.sh@33 -- # return 0 00:03:39.508 17:14:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.508 17:14:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.508 17:14:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.509 17:14:18 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:39.509 17:14:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:39.509 node0=512 expecting 513 00:03:39.509 17:14:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.509 17:14:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.509 17:14:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.509 17:14:18 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:39.509 node1=513 expecting 512 00:03:39.509 17:14:18 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:39.509 00:03:39.509 real 0m3.164s 00:03:39.509 user 0m1.228s 00:03:39.509 sys 0m1.981s 00:03:39.509 17:14:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.509 17:14:18 -- common/autotest_common.sh@10 -- # set +x 00:03:39.509 ************************************ 00:03:39.509 END TEST odd_alloc 00:03:39.509 ************************************ 00:03:39.509 17:14:18 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:39.509 17:14:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.509 17:14:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.509 17:14:18 -- common/autotest_common.sh@10 -- # set +x 00:03:39.509 ************************************ 00:03:39.509 START TEST custom_alloc 00:03:39.509 ************************************ 00:03:39.509 17:14:18 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:39.509 17:14:18 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:39.509 17:14:18 -- setup/hugepages.sh@169 -- # local node 00:03:39.509 17:14:18 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:39.509 17:14:18 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:39.509 17:14:18 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:39.509 17:14:18 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:39.509 17:14:18 -- setup/hugepages.sh@49 -- # local size=1048576 
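The custom_alloc prologue starting here calls get_test_nr_hugepages, which turns a requested size in kB (1048576 here, then 2097152 further on) into a page count against the default hugepage size, 2048 kB on this rig, yielding nr_hugepages=512 and 1024 respectively. A minimal sketch of that conversion, with the fixed 2048 kB page size as an assumption:

```shell
#!/usr/bin/env bash
# Minimal sketch: convert a requested size in kB into a hugepage count.
# default_hugepages is assumed to be 2048 kB (x86_64 2M pages); real
# code would read Hugepagesize from /proc/meminfo instead.
default_hugepages=2048

get_test_nr_hugepages() {
    local size=$1
    # A request smaller than one hugepage cannot be satisfied.
    (( size >= default_hugepages )) || return 1
    echo $(( size / default_hugepages ))
}

get_test_nr_hugepages 1048576   # → 512
get_test_nr_hugepages 2097152   # → 1024
```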
00:03:39.509 17:14:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:39.509 17:14:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.509 17:14:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.509 17:14:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.509 17:14:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:39.509 17:14:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.509 17:14:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.509 17:14:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.509 17:14:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:39.509 17:14:18 -- setup/hugepages.sh@83 -- # : 256 00:03:39.509 17:14:18 -- setup/hugepages.sh@84 -- # : 1 00:03:39.509 17:14:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:39.509 17:14:18 -- setup/hugepages.sh@83 -- # : 0 00:03:39.509 17:14:18 -- setup/hugepages.sh@84 -- # : 0 00:03:39.509 17:14:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:39.509 17:14:18 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:39.509 17:14:18 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:39.509 17:14:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:39.509 17:14:18 -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.509 17:14:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.509 17:14:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.509 17:14:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.509 17:14:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.509 17:14:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.509 17:14:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.509 17:14:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.509 17:14:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:39.509 17:14:18 -- setup/hugepages.sh@78 -- # return 0 00:03:39.509 17:14:18 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:39.509 17:14:18 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:39.509 17:14:18 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:39.509 17:14:18 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:39.509 17:14:18 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:39.509 17:14:18 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:39.509 17:14:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.509 17:14:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.509 17:14:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.509 17:14:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.509 17:14:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.509 17:14:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.509 17:14:18 -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:39.509 17:14:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.509 17:14:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:39.509 17:14:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.509 17:14:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:39.509 17:14:18 -- setup/hugepages.sh@78 -- # return 0 00:03:39.509 17:14:18 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:39.509 17:14:18 -- setup/hugepages.sh@187 -- # setup output 00:03:39.509 17:14:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.509 17:14:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.044 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:42.044 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:42.044 0000:80:04.1 (8086 2021): Already using the 
vfio-pci driver 00:03:42.044 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:42.307 17:14:21 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:42.307 17:14:21 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:42.307 17:14:21 -- setup/hugepages.sh@89 -- # local node 00:03:42.307 17:14:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.307 17:14:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.307 17:14:21 -- setup/hugepages.sh@92 -- # local surp 00:03:42.307 17:14:21 -- setup/hugepages.sh@93 -- # local resv 00:03:42.307 17:14:21 -- setup/hugepages.sh@94 -- # local anon 00:03:42.307 17:14:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.307 17:14:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.307 17:14:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.307 17:14:21 -- setup/common.sh@18 -- # local node= 00:03:42.307 17:14:21 -- setup/common.sh@19 -- # local var val 00:03:42.307 17:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.307 17:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.307 17:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.307 17:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.307 17:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.307 17:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.307 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.307 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.307 17:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69377700 kB' 'MemAvailable: 72843092 kB' 'Buffers: 2704 kB' 'Cached: 15859496 kB' 'SwapCached: 0 kB' 'Active: 13018144 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565832 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688268 kB' 'Mapped: 207396 kB' 'Shmem: 11880928 kB' 'KReclaimable: 275028 kB' 'Slab: 905768 kB' 'SReclaimable: 275028 kB' 'SUnreclaim: 630740 kB' 'KernelStack: 22704 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 13985320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219636 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:42.307 17:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.307 17:14:21 -- setup/common.sh@32 -- # continue [the same read/compare/continue cycle repeats for each subsequent meminfo field, MemFree through WritebackTmp, where this portion of the trace is truncated]
setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.308 17:14:21 -- setup/common.sh@33 -- # echo 0 00:03:42.308 17:14:21 -- setup/common.sh@33 -- # return 0 00:03:42.308 17:14:21 -- setup/hugepages.sh@97 -- # anon=0 00:03:42.308 17:14:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.308 17:14:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.308 17:14:21 -- setup/common.sh@18 -- # local node= 00:03:42.308 17:14:21 -- setup/common.sh@19 -- # local var val 00:03:42.308 17:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.308 17:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.308 17:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.308 17:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.308 17:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.308 17:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69380468 kB' 'MemAvailable: 72845856 kB' 'Buffers: 2704 kB' 'Cached: 15859500 kB' 'SwapCached: 0 kB' 'Active: 13017728 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565416 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687912 kB' 'Mapped: 207396 kB' 'Shmem: 11880932 kB' 'KReclaimable: 275020 kB' 'Slab: 905716 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 630696 kB' 'KernelStack: 22672 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 
'Committed_AS: 14000472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219604 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.308 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.308 17:14:21 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.307 17:14:21 -- setup/common.sh@32 -- # continue [trace condensed: each remaining /proc/meminfo field from Active through Unaccepted is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped via setup/common.sh@32 `continue`, with setup/common.sh@31 `IFS=': '` / `read -r var val _` repeated per field] 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # IFS=': '
00:03:42.309 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.309 17:14:21 -- setup/common.sh@33 -- # echo 0 00:03:42.309 17:14:21 -- setup/common.sh@33 -- # return 0 00:03:42.309 17:14:21 -- setup/hugepages.sh@99 -- # surp=0 00:03:42.309 17:14:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.309 17:14:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.309 17:14:21 -- setup/common.sh@18 -- # local node= 00:03:42.309 17:14:21 -- setup/common.sh@19 -- # local var val 00:03:42.309 17:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.309 17:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.309 17:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.309 17:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.309 17:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.309 17:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.309 17:14:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.309 17:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69380248 kB' 'MemAvailable: 72845636 kB' 'Buffers: 2704 kB' 'Cached: 15859500 kB' 'SwapCached: 0 kB' 'Active: 13017548 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565236 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687768 kB' 'Mapped: 207396 kB' 'Shmem: 11880932 kB' 'KReclaimable: 275020 kB' 'Slab: 905768 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 630748 kB' 'KernelStack: 22672 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 13985852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219604 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.309 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.309 17:14:21 -- setup/common.sh@32 -- # continue [trace condensed: each remaining /proc/meminfo field from Buffers onward is compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped via setup/common.sh@32 `continue`, with setup/common.sh@31 `IFS=': '` / `read -r var val _` repeated per field; trace continues]
17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.310 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.310 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.310 17:14:21 -- setup/common.sh@33 -- # echo 0 00:03:42.310 17:14:21 -- setup/common.sh@33 -- # return 0 00:03:42.310 17:14:21 -- setup/hugepages.sh@100 -- # resv=0 00:03:42.310 17:14:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:42.310 nr_hugepages=1536 00:03:42.310 17:14:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.310 resv_hugepages=0 00:03:42.310 17:14:21 -- setup/hugepages.sh@104 -- # echo 
surplus_hugepages=0
00:03:42.310 17:14:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:42.310 17:14:21 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:42.310 17:14:21 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:42.310 17:14:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:42.311 17:14:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:42.311 17:14:21 -- setup/common.sh@18 -- # local node=
00:03:42.311 17:14:21 -- setup/common.sh@19 -- # local var val
00:03:42.311 17:14:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.311 17:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.311 17:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.311 17:14:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.311 17:14:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.311 17:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.311 17:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69380912 kB' 'MemAvailable: 72846300 kB' 'Buffers: 2704 kB' 'Cached: 15859540 kB' 'SwapCached: 0 kB' 'Active: 13017544 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565232 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687692 kB' 'Mapped: 207396 kB' 'Shmem: 11880972 kB' 'KReclaimable: 275020 kB' 'Slab: 905768 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 630748 kB' 'KernelStack: 22656 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 13985864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219604 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB'
00:03:42.311 17:14:21 -- setup/common.sh@31 -- # IFS=': '
00:03:42.311 17:14:21 -- setup/common.sh@31 -- # read -r var val _
00:03:42.311 17:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.311 17:14:21 -- setup/common.sh@32 -- # continue
[... identical read/compare trace repeated for each remaining meminfo field until HugePages_Total matches ...]
00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.312 17:14:21 -- setup/common.sh@33 -- # echo 1536
00:03:42.312 17:14:21 -- setup/common.sh@33 -- # return 0
00:03:42.312 17:14:21 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:42.312 17:14:21 -- setup/hugepages.sh@112 -- # get_nodes
00:03:42.312 17:14:21 -- setup/hugepages.sh@27 -- # local node
00:03:42.312 17:14:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.312 17:14:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:42.312 17:14:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.312 17:14:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:42.312 17:14:21 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:42.312 17:14:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:42.312 17:14:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.312 17:14:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.312 17:14:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:42.312 17:14:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.312 17:14:21 -- setup/common.sh@18 -- # local node=0
00:03:42.312 17:14:21 -- setup/common.sh@19 -- # local var val
00:03:42.312 17:14:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.312 17:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.312 17:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:42.312 17:14:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:42.312 17:14:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.312 17:14:21 -- setup/common.sh@29 -- #
mem=("${mem[@]#Node +([0-9]) }") 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40818684 kB' 'MemUsed: 7249712 kB' 'SwapCached: 0 kB' 'Active: 4985104 kB' 'Inactive: 228300 kB' 'Active(anon): 4858584 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5065180 kB' 'Mapped: 40824 kB' 'AnonPages: 151428 kB' 'Shmem: 4710360 kB' 'KernelStack: 11960 kB' 'PageTables: 3124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124336 kB' 'Slab: 401560 kB' 'SReclaimable: 124336 kB' 'SUnreclaim: 277224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- 
setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # 
continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.312 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.312 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.313 17:14:21 -- setup/common.sh@31 
-- # IFS=': ' 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # continue [... identical per-field scan repeats for PageTables through HugePages_Free ...] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@33 -- # echo 0 00:03:42.313 17:14:21 -- setup/common.sh@33 -- # return 0 00:03:42.313 17:14:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.313 17:14:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.313 17:14:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.313 17:14:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.313 17:14:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.313 17:14:21 -- setup/common.sh@18 -- # local node=1 00:03:42.313 17:14:21 -- setup/common.sh@19 -- # local var val 00:03:42.313 17:14:21 -- setup/common.sh@20 -- # local mem_f mem 
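The scan traced above can be sketched as follows. This is a hedged reconstruction, not SPDK's actual helper (the function name `get_field` and the temp-file demo are ours): `setup/common.sh`'s `get_meminfo` reads `"Field: value"` pairs, `continue`s past non-matching fields, and echoes the value once the requested field (here `HugePages_Surp`) matches. Per-node files under `/sys/devices/system/node/nodeN/meminfo` prefix each line with `"Node N "`, which the real script strips via `"${mem[@]#Node +([0-9]) }"`; `sed` does the equivalent here.

```shell
#!/usr/bin/env bash
# Hedged sketch of the field scan visible in this trace (get_field is an
# assumed name, not SPDK's): split each line on ": " and scan until the
# requested field matches, mirroring the IFS=': ' / read / continue lines.
get_field() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$file")   # drop per-node "Node N " prefix
    return 1
}

# Demo on a meminfo-style sample; HugePages_Surp is the field scanned for above.
sample=$(mktemp)
printf '%s\n' 'Node 1 HugePages_Total: 1024' \
              'Node 1 HugePages_Free: 1024' \
              'Node 1 HugePages_Surp: 0' > "$sample"
get_field HugePages_Surp "$sample"   # prints 0
rm -f "$sample"
```

Feeding the loop via process substitution (rather than a pipe) keeps the `while` in the current shell, so `return` exits the function as soon as the field is found.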
00:03:42.313 17:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.313 17:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.313 17:14:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.313 17:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.313 17:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.313 17:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 28561888 kB' 'MemUsed: 15656320 kB' 'SwapCached: 0 kB' 'Active: 8032716 kB' 'Inactive: 3300660 kB' 'Active(anon): 7706924 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10797080 kB' 'Mapped: 166572 kB' 'AnonPages: 536524 kB' 'Shmem: 7170628 kB' 'KernelStack: 10696 kB' 'PageTables: 5848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150684 kB' 'Slab: 504208 kB' 'SReclaimable: 150684 kB' 'SUnreclaim: 353524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # continue 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.313 17:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.313 17:14:21 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.313 17:14:21 -- setup/common.sh@32 -- # continue [... identical per-field scan repeats for SwapCached through HugePages_Free ...] 00:03:42.314 17:14:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.314 17:14:21 -- setup/common.sh@33 -- # echo 0 00:03:42.314 17:14:21 -- setup/common.sh@33 -- # return 0 00:03:42.314 17:14:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.314 17:14:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.314 17:14:21 -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:03:42.314 17:14:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.314 17:14:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.314 node0=512 expecting 512 00:03:42.314 17:14:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.314 17:14:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.314 17:14:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.314 17:14:21 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:42.314 node1=1024 expecting 1024 00:03:42.314 17:14:21 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:42.314 00:03:42.314 real 0m3.084s 00:03:42.314 user 0m1.213s 00:03:42.314 sys 0m1.910s 00:03:42.314 17:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.314 17:14:21 -- common/autotest_common.sh@10 -- # set +x 00:03:42.314 ************************************ 00:03:42.314 END TEST custom_alloc 00:03:42.314 ************************************ 00:03:42.314 17:14:21 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:42.314 17:14:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.314 17:14:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.314 17:14:21 -- common/autotest_common.sh@10 -- # set +x 00:03:42.572 ************************************ 00:03:42.572 START TEST no_shrink_alloc 00:03:42.572 ************************************ 00:03:42.572 17:14:21 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:42.572 17:14:21 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:42.573 17:14:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.573 17:14:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.573 17:14:21 -- setup/hugepages.sh@51 -- # shift 00:03:42.573 17:14:21 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.573 17:14:21 -- setup/hugepages.sh@52 -- # local 
node_ids 00:03:42.573 17:14:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.573 17:14:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.573 17:14:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.573 17:14:21 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.573 17:14:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.573 17:14:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.573 17:14:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.573 17:14:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.573 17:14:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.573 17:14:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.573 17:14:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.573 17:14:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.573 17:14:21 -- setup/hugepages.sh@73 -- # return 0 00:03:42.573 17:14:21 -- setup/hugepages.sh@198 -- # setup output 00:03:42.573 17:14:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.573 17:14:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.105 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:45.105 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:80:04.6 (8086 2021): Already using 
the vfio-pci driver 00:03:45.105 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.105 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.105 17:14:23 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:45.105 17:14:23 -- setup/hugepages.sh@89 -- # local node 00:03:45.105 17:14:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.105 17:14:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.105 17:14:23 -- setup/hugepages.sh@92 -- # local surp 00:03:45.105 17:14:23 -- setup/hugepages.sh@93 -- # local resv 00:03:45.105 17:14:23 -- setup/hugepages.sh@94 -- # local anon 00:03:45.105 17:14:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.105 17:14:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.105 17:14:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.105 17:14:24 -- setup/common.sh@18 -- # local node= 00:03:45.105 17:14:24 -- setup/common.sh@19 -- # local var val 00:03:45.105 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.105 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.105 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.105 17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.105 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.105 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.105 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.105 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.105 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70421580 kB' 'MemAvailable: 73886968 kB' 
'Buffers: 2704 kB' 'Cached: 15859616 kB' 'SwapCached: 0 kB' 'Active: 13019424 kB' 'Inactive: 3528960 kB' 'Active(anon): 12567112 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688748 kB' 'Mapped: 207632 kB' 'Shmem: 11881048 kB' 'KReclaimable: 275020 kB' 'Slab: 906260 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 631240 kB' 'KernelStack: 22672 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219668 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.105 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.105 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.105 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.105 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.105 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.105 17:14:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.105 17:14:24 -- setup/common.sh@32 -- # continue [... identical per-field scan repeats for Cached through Bounce ...]
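Elsewhere in this trace the per-node results are echoed (`node0=512 expecting 512`, `node1=1024 expecting 1024`) and joined for a final comparison, `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]`. A minimal sketch of that check, with the counts hard-coded from this run (surplus pages were 0 on both nodes; the array name follows the script's `nodes_test`):

```shell
#!/usr/bin/env bash
# Hedged sketch of the per-node verification step seen in this trace:
# add the surplus count into each node's total, echo the expectation
# line, then join the counts with commas and compare to the target.
nodes_test=(512 1024)      # hugepages assigned to node0 and node1 in this run
expected="512,1024"

for node in "${!nodes_test[@]}"; do
    surp=0                                   # get_meminfo HugePages_Surp returned 0
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done

actual=$(IFS=,; echo "${nodes_test[*]}")     # join per-node counts with commas
[[ $actual == "$expected" ]] && echo "hugepage distribution OK"
```

Setting `IFS=,` only inside the command substitution keeps the comma-join local, so the rest of the script's word splitting is unaffected.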
00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue [... identical per-field scan repeats for CommitLimit through Percpu ...] 00:03:45.106 17:14:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.106 17:14:24 -- setup/common.sh@33 -- # echo 0 00:03:45.106 17:14:24 -- setup/common.sh@33 -- # return 0 00:03:45.106 17:14:24 -- setup/hugepages.sh@97 -- # anon=0 00:03:45.106 17:14:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.106 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.106 17:14:24 -- setup/common.sh@18 -- # local node= 00:03:45.106 17:14:24 -- setup/common.sh@19 -- # local var val 00:03:45.106 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.106 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.106 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.106 17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.106 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.106 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70421016 kB' 'MemAvailable: 73886404 kB' 'Buffers: 2704 kB' 'Cached: 15859620 kB' 'SwapCached: 0 kB' 'Active: 13018484 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566172 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 688304 kB' 'Mapped: 207552 kB' 'Shmem: 11881052 kB' 'KReclaimable: 275020 kB' 'Slab: 906228 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 631208 kB' 'KernelStack: 22656 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219652 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 
17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 
00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.106 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.106 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.107 17:14:24 -- setup/common.sh@33 -- # echo 0 00:03:45.107 17:14:24 -- setup/common.sh@33 -- # return 0 00:03:45.107 17:14:24 -- setup/hugepages.sh@99 -- # surp=0 00:03:45.107 17:14:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.107 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.107 17:14:24 -- setup/common.sh@18 -- # local node= 00:03:45.107 17:14:24 -- setup/common.sh@19 -- # local var val 00:03:45.107 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.107 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.107 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.107 
17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.107 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.107 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70421016 kB' 'MemAvailable: 73886404 kB' 'Buffers: 2704 kB' 'Cached: 15859624 kB' 'SwapCached: 0 kB' 'Active: 13018176 kB' 'Inactive: 3528960 kB' 'Active(anon): 12565864 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687992 kB' 'Mapped: 207552 kB' 'Shmem: 11881056 kB' 'KReclaimable: 275020 kB' 'Slab: 906228 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 631208 kB' 'KernelStack: 22656 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219652 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.107 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.107 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- 
setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 
-- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 
17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.367 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.367 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.368 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.368 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.368 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.368 17:14:24 -- setup/common.sh@32 -- # continue 00:03:45.368 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.368 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.368 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.368 17:14:24 -- setup/common.sh@33 -- # echo 0 00:03:45.368 17:14:24 -- setup/common.sh@33 -- # return 0 00:03:45.368 17:14:24 -- setup/hugepages.sh@100 
-- # resv=0 00:03:45.368 17:14:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.368 nr_hugepages=1024 00:03:45.368 17:14:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.368 resv_hugepages=0 00:03:45.368 17:14:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.368 surplus_hugepages=0 00:03:45.368 17:14:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.368 anon_hugepages=0 00:03:45.368 17:14:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.368 17:14:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.368 17:14:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.368 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.368 17:14:24 -- setup/common.sh@18 -- # local node= 00:03:45.368 17:14:24 -- setup/common.sh@19 -- # local var val 00:03:45.368 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.368 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.368 17:14:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.368 17:14:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.368 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.368 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.368 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.368 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.368 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70421016 kB' 'MemAvailable: 73886404 kB' 'Buffers: 2704 kB' 'Cached: 15859644 kB' 'SwapCached: 0 kB' 'Active: 13018504 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566192 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688304 kB' 'Mapped: 207552 kB' 
'Shmem: 11881076 kB' 'KReclaimable: 275020 kB' 'Slab: 906228 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 631208 kB' 'KernelStack: 22656 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219652 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:45.368 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.368 17:14:24 -- setup/common.sh@32 -- # continue [... identical IFS=': '/read/continue iterations for MemFree through Unaccepted elided ...] 00:03:45.369 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.369 17:14:24 -- setup/common.sh@33 -- # echo 1024 00:03:45.369 17:14:24 -- setup/common.sh@33 -- # return 0 00:03:45.369 17:14:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.369 17:14:24 -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.369 17:14:24 -- setup/hugepages.sh@27 -- # local node 00:03:45.369 17:14:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.369 17:14:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.369 17:14:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.369 17:14:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.369 17:14:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.369 17:14:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.369 17:14:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.369 17:14:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.369 17:14:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.369 17:14:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.369 17:14:24 -- setup/common.sh@18 -- # local node=0 00:03:45.369 17:14:24 -- setup/common.sh@19 -- # local var val 00:03:45.369 17:14:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.369 17:14:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.369 17:14:24 --
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.369 17:14:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.369 17:14:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.369 17:14:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.369 17:14:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.369 17:14:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.369 17:14:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 39766888 kB' 'MemUsed: 8301508 kB' 'SwapCached: 0 kB' 'Active: 4985232 kB' 'Inactive: 228300 kB' 'Active(anon): 4858712 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5065184 kB' 'Mapped: 40824 kB' 'AnonPages: 151460 kB' 'Shmem: 4710364 kB' 'KernelStack: 11944 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124336 kB' 'Slab: 401752 kB' 'SReclaimable: 124336 kB' 'SUnreclaim: 277416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.369 17:14:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.369 17:14:24 -- setup/common.sh@32 -- # continue [... identical IFS=': '/read/continue iterations for MemFree through HugePages_Free elided ...] 00:03:45.369 17:14:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.369 17:14:24 -- setup/common.sh@33 -- # echo 0 00:03:45.369 17:14:24 -- setup/common.sh@33 -- # return 0 00:03:45.369 17:14:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.369 17:14:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.369 17:14:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.369 17:14:24 -- setup/hugepages.sh@127 -- #
sorted_s[nodes_sys[node]]=1 00:03:45.369 17:14:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.369 node0=1024 expecting 1024 00:03:45.369 17:14:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.369 17:14:24 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:45.369 17:14:24 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:45.369 17:14:24 -- setup/hugepages.sh@202 -- # setup output 00:03:45.369 17:14:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.369 17:14:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.900 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.900 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.900 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.161 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.161 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:48.161 17:14:26 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:48.161 17:14:26 -- setup/hugepages.sh@89 -- # local node 00:03:48.161 17:14:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.161 17:14:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.161 17:14:26 -- setup/hugepages.sh@92 -- # local surp 00:03:48.161 17:14:26 -- setup/hugepages.sh@93 -- # local resv 00:03:48.161 17:14:26 -- setup/hugepages.sh@94 -- # local anon 00:03:48.161 17:14:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.161 17:14:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.161 17:14:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.161 17:14:26 -- setup/common.sh@18 -- # local node= 00:03:48.161 17:14:26 -- setup/common.sh@19 -- # local var val 00:03:48.161 17:14:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.161 17:14:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.161 17:14:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.161 17:14:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.161 17:14:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.161 17:14:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.161 17:14:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.161 17:14:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.161 17:14:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70423512 kB' 'MemAvailable: 73888900 kB' 'Buffers: 2704 kB' 'Cached: 15859724 kB' 'SwapCached: 0 kB' 'Active: 13018760 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566448 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688508 kB' 'Mapped: 207412 kB' 'Shmem: 11881156 kB' 'KReclaimable: 275020 kB' 'Slab: 906012 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 630992 kB' 
'KernelStack: 22592 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219524 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:48.161 17:14:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.161 17:14:26 -- setup/common.sh@32 -- # continue [... identical IFS=': '/read/continue iterations for MemFree through Dirty elided ...] 00:03:48.161 17:14:26
-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.161 17:14:26 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:26 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.162 17:14:27 -- 
setup/common.sh@33 -- # echo 0 00:03:48.162 17:14:27 -- setup/common.sh@33 -- # return 0 00:03:48.162 17:14:27 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.162 17:14:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.162 17:14:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.162 17:14:27 -- setup/common.sh@18 -- # local node= 00:03:48.162 17:14:27 -- setup/common.sh@19 -- # local var val 00:03:48.162 17:14:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.162 17:14:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.162 17:14:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.162 17:14:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.162 17:14:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.162 17:14:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70424848 kB' 'MemAvailable: 73890236 kB' 'Buffers: 2704 kB' 'Cached: 15859728 kB' 'SwapCached: 0 kB' 'Active: 13018688 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566376 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688468 kB' 'Mapped: 207412 kB' 'Shmem: 11881160 kB' 'KReclaimable: 275020 kB' 'Slab: 906036 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 631016 kB' 'KernelStack: 22624 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219492 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.162 
17:14:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.162 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.162 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- 
setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- 
setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- 
# continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.163 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.163 17:14:27 -- setup/common.sh@33 -- # echo 0 00:03:48.163 17:14:27 -- setup/common.sh@33 -- # return 0 00:03:48.163 17:14:27 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.163 17:14:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.163 17:14:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.163 17:14:27 -- setup/common.sh@18 -- # local node= 00:03:48.163 17:14:27 -- setup/common.sh@19 -- # local var val 00:03:48.163 17:14:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.163 17:14:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.163 17:14:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.163 17:14:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.163 17:14:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.163 17:14:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.163 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70425224 kB' 'MemAvailable: 73890612 kB' 'Buffers: 2704 kB' 'Cached: 15859740 kB' 
'SwapCached: 0 kB' 'Active: 13018700 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566388 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688472 kB' 'Mapped: 207412 kB' 'Shmem: 11881172 kB' 'KReclaimable: 275020 kB' 'Slab: 906036 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 631016 kB' 'KernelStack: 22624 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219492 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 
17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- 
setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # 
continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.164 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.164 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 
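The long run of `continue` lines above is `setup/common.sh`'s `get_meminfo` helper walking `/proc/meminfo` one field at a time: `IFS=': ' read -r var val _` splits each line into key and value, and the `[[ $var == HugePages_Rsvd ]]` test (bash's xtrace escapes every character of the unquoted pattern, which is why the log shows `\H\u\g\e\P\a\g\e\s\_\R\s\v\d`) skips key after key until the requested one matches, at which point the value is echoed back. Below is a minimal standalone sketch of that pattern; the helper name `get_meminfo` and the per-node meminfo path are taken from the trace, but the implementation is my own simplification (the real script reads the file into an array with `mapfile` and strips the `Node N ` prefix with a parameter expansion rather than streaming through `sed`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern driving the xtrace above:
# print the value of one /proc/meminfo key, optionally for a single
# NUMA node; return non-zero if the key is absent.
get_meminfo() {
    local key=$1 node=${2:-}   # e.g. get_meminfo HugePages_Rsvd 0
    local mem_f=/proc/meminfo var val _
    # Per-node counters live under /sys/devices/system/node/nodeN/meminfo
    # and prefix every line with "Node N ", which the sed strips off.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1   # key not found in this meminfo file
}
```

On the host that produced this log, such a helper would print `0` for `HugePages_Rsvd` and `1024` for `HugePages_Total`, matching the `echo 0` / `echo 1024` returns recorded later in the trace before `hugepages.sh` checks `(( 1024 == nr_hugepages + surp + resv ))`.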
00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.165 17:14:27 -- setup/common.sh@33 -- # echo 0 00:03:48.165 17:14:27 -- setup/common.sh@33 -- # return 0 00:03:48.165 17:14:27 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.165 17:14:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.165 nr_hugepages=1024 00:03:48.165 17:14:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.165 resv_hugepages=0 00:03:48.165 17:14:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.165 surplus_hugepages=0 00:03:48.165 17:14:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.165 anon_hugepages=0 00:03:48.165 17:14:27 -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.165 17:14:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.165 17:14:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.165 17:14:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.165 17:14:27 -- setup/common.sh@18 -- # local node= 00:03:48.165 17:14:27 -- setup/common.sh@19 -- # local var val 00:03:48.165 17:14:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.165 17:14:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.165 17:14:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.165 17:14:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.165 17:14:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.165 17:14:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70425624 kB' 'MemAvailable: 73891012 kB' 'Buffers: 2704 kB' 'Cached: 15859752 kB' 'SwapCached: 0 kB' 'Active: 13018752 kB' 'Inactive: 3528960 kB' 'Active(anon): 12566440 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688468 kB' 'Mapped: 207412 kB' 'Shmem: 11881184 kB' 'KReclaimable: 275020 kB' 'Slab: 906036 kB' 'SReclaimable: 275020 kB' 'SUnreclaim: 631016 kB' 'KernelStack: 22624 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13986516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219492 kB' 'VmallocChunk: 0 kB' 'Percpu: 84224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3390420 kB' 'DirectMap2M: 30892032 kB' 'DirectMap1G: 67108864 kB' 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.165 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.165 17:14:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.166 17:14:27 -- setup/common.sh@33 -- # echo 1024 00:03:48.166 17:14:27 -- setup/common.sh@33 -- # return 0 00:03:48.166 17:14:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.166 17:14:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.166 17:14:27 -- setup/hugepages.sh@27 -- # local node 00:03:48.166 17:14:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.166 17:14:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.166 17:14:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.166 17:14:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.166 17:14:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.166 17:14:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.166 17:14:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.166 17:14:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.166 17:14:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.166 17:14:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.166 17:14:27 -- setup/common.sh@18 -- # local node=0 00:03:48.166 17:14:27 -- setup/common.sh@19 -- # local var val 00:03:48.166 17:14:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.166 17:14:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.166 17:14:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.166 17:14:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.166 17:14:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.166 17:14:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
48068396 kB' 'MemFree: 39777980 kB' 'MemUsed: 8290416 kB' 'SwapCached: 0 kB' 'Active: 4985632 kB' 'Inactive: 228300 kB' 'Active(anon): 4859112 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5065212 kB' 'Mapped: 40824 kB' 'AnonPages: 151880 kB' 'Shmem: 4710392 kB' 'KernelStack: 11976 kB' 'PageTables: 3180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124336 kB' 'Slab: 401548 kB' 'SReclaimable: 124336 kB' 'SUnreclaim: 277212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.166 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.166 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.167 17:14:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.167 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.167 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.167 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.167 17:14:27 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.167 17:14:27 -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / [[ <meminfo field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" xtrace repeated for each remaining /proc/meminfo field, Inactive through FileHugePages ...] 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # continue 
00:03:48.425 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # continue 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 17:14:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 17:14:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 17:14:27 -- setup/common.sh@33 -- # echo 0 00:03:48.425 17:14:27 -- setup/common.sh@33 -- # return 0 00:03:48.425 17:14:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.425 17:14:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.425 17:14:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.425 17:14:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.425 17:14:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.425 node0=1024 expecting 1024 00:03:48.425 17:14:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.425 00:03:48.425 real 0m5.858s 00:03:48.425 user 0m2.190s 00:03:48.425 sys 0m3.737s 00:03:48.425 17:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.425 17:14:27 -- common/autotest_common.sh@10 -- # set +x 00:03:48.425 
************************************ 00:03:48.425 END TEST no_shrink_alloc 00:03:48.425 ************************************ 00:03:48.425 17:14:27 -- setup/hugepages.sh@217 -- # clear_hp 00:03:48.425 17:14:27 -- setup/hugepages.sh@37 -- # local node hp 00:03:48.425 17:14:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.425 17:14:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.425 17:14:27 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.425 17:14:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.425 17:14:27 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.425 17:14:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.425 17:14:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.425 17:14:27 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.425 17:14:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.425 17:14:27 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.425 17:14:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.425 17:14:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.425 00:03:48.425 real 0m23.019s 00:03:48.425 user 0m8.790s 00:03:48.425 sys 0m13.744s 00:03:48.425 17:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.425 17:14:27 -- common/autotest_common.sh@10 -- # set +x 00:03:48.425 ************************************ 00:03:48.425 END TEST hugepages 00:03:48.425 ************************************ 00:03:48.425 17:14:27 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:48.425 17:14:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.425 17:14:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.425 17:14:27 -- common/autotest_common.sh@10 -- 
# set +x 00:03:48.425 ************************************ 00:03:48.425 START TEST driver 00:03:48.425 ************************************ 00:03:48.425 17:14:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:48.425 * Looking for test storage... 00:03:48.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:48.425 17:14:27 -- setup/driver.sh@68 -- # setup reset 00:03:48.425 17:14:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.425 17:14:27 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.611 17:14:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:52.611 17:14:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.611 17:14:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.611 17:14:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.611 ************************************ 00:03:52.611 START TEST guess_driver 00:03:52.611 ************************************ 00:03:52.611 17:14:31 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:52.611 17:14:31 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:52.611 17:14:31 -- setup/driver.sh@47 -- # local fail=0 00:03:52.611 17:14:31 -- setup/driver.sh@49 -- # pick_driver 00:03:52.611 17:14:31 -- setup/driver.sh@36 -- # vfio 00:03:52.611 17:14:31 -- setup/driver.sh@21 -- # local iommu_grups 00:03:52.611 17:14:31 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:52.611 17:14:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:52.611 17:14:31 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:52.611 17:14:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:52.611 17:14:31 -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:03:52.611 17:14:31 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:52.611 17:14:31 -- 
setup/driver.sh@14 -- # mod vfio_pci 00:03:52.611 17:14:31 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:52.611 17:14:31 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:52.611 17:14:31 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:52.611 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.611 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.611 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.611 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.611 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:52.612 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:52.612 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:52.612 17:14:31 -- setup/driver.sh@30 -- # return 0 00:03:52.612 17:14:31 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:52.612 17:14:31 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:52.612 17:14:31 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:52.612 17:14:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:52.612 Looking for driver=vfio-pci 00:03:52.612 17:14:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.612 17:14:31 -- setup/driver.sh@45 -- # setup output config 00:03:52.612 17:14:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.612 17:14:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.899 17:14:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.899 17:14:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.899 17:14:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.899 17:14:34 -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.899 17:14:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.899 17:14:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver [... identical "[[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] / read -r _ _ _ _ marker setup_driver" xtrace repeated for each remaining setup-config marker line ...] 00:03:56.467 17:14:35 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.467 17:14:35 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.467 17:14:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.725 17:14:35 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:56.725 17:14:35 -- setup/driver.sh@65 -- # setup reset 00:03:56.725 17:14:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.725 17:14:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.915 00:04:00.915 real 0m8.269s 00:04:00.915 user 0m2.389s 
00:04:00.915 sys 0m4.285s 00:04:00.915 17:14:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.915 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.915 ************************************ 00:04:00.915 END TEST guess_driver 00:04:00.915 ************************************ 00:04:00.915 00:04:00.915 real 0m12.585s 00:04:00.915 user 0m3.546s 00:04:00.915 sys 0m6.619s 00:04:00.915 17:14:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.915 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.915 ************************************ 00:04:00.915 END TEST driver 00:04:00.915 ************************************ 00:04:00.915 17:14:39 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:00.915 17:14:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.915 17:14:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.915 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.915 ************************************ 00:04:00.915 START TEST devices 00:04:00.915 ************************************ 00:04:00.915 17:14:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:01.173 * Looking for test storage... 
00:04:01.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:01.173 17:14:39 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:01.173 17:14:39 -- setup/devices.sh@192 -- # setup reset 00:04:01.173 17:14:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.173 17:14:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.459 17:14:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:04.459 17:14:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:04.459 17:14:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:04.459 17:14:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:04.459 17:14:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:04.459 17:14:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:04.459 17:14:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:04.459 17:14:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.459 17:14:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:04.459 17:14:43 -- setup/devices.sh@196 -- # blocks=() 00:04:04.459 17:14:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:04.459 17:14:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:04.459 17:14:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:04.459 17:14:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:04.459 17:14:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:04.459 17:14:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:04.459 17:14:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:04.459 17:14:43 -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:04:04.459 17:14:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:04:04.459 17:14:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:04.459 17:14:43 -- scripts/common.sh@380 
-- # local block=nvme0n1 pt 00:04:04.459 17:14:43 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:04.459 No valid GPT data, bailing 00:04:04.459 17:14:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.459 17:14:43 -- scripts/common.sh@393 -- # pt= 00:04:04.459 17:14:43 -- scripts/common.sh@394 -- # return 1 00:04:04.459 17:14:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:04.459 17:14:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:04.459 17:14:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:04.459 17:14:43 -- setup/common.sh@80 -- # echo 1000204886016 00:04:04.459 17:14:43 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:04.459 17:14:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:04.459 17:14:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:04:04.459 17:14:43 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:04.459 17:14:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:04.459 17:14:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:04.459 17:14:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.459 17:14:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.459 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:04:04.459 ************************************ 00:04:04.459 START TEST nvme_mount 00:04:04.459 ************************************ 00:04:04.459 17:14:43 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:04.459 17:14:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:04.459 17:14:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:04.459 17:14:43 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.459 17:14:43 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.459 17:14:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:04.459 17:14:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:04.459 17:14:43 -- setup/common.sh@40 -- # local part_no=1 00:04:04.459 17:14:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:04.459 17:14:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.459 17:14:43 -- setup/common.sh@44 -- # parts=() 00:04:04.459 17:14:43 -- setup/common.sh@44 -- # local parts 00:04:04.459 17:14:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.459 17:14:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.459 17:14:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.459 17:14:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:04.459 17:14:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.459 17:14:43 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:04.459 17:14:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:04.459 17:14:43 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:05.395 Creating new GPT entries in memory. 00:04:05.395 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.395 other utilities. 00:04:05.395 17:14:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.395 17:14:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.395 17:14:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.395 17:14:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.395 17:14:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:06.772 Creating new GPT entries in memory. 00:04:06.772 The operation has completed successfully. 
00:04:06.772 17:14:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:06.772 17:14:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.772 17:14:45 -- setup/common.sh@62 -- # wait 3897070 00:04:06.772 17:14:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.772 17:14:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:06.772 17:14:45 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.772 17:14:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:06.772 17:14:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:06.772 17:14:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.772 17:14:45 -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.772 17:14:45 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:06.772 17:14:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:06.772 17:14:45 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.772 17:14:45 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.772 17:14:45 -- setup/devices.sh@53 -- # local found=0 00:04:06.772 17:14:45 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.772 17:14:45 -- setup/devices.sh@56 -- # : 00:04:06.772 17:14:45 -- setup/devices.sh@59 -- # local pci status 00:04:06.772 17:14:45 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:06.772 17:14:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:06.772 17:14:45 -- setup/devices.sh@47 -- # setup output config 00:04:06.772 17:14:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.772 17:14:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.301 17:14:48 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.301 17:14:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:09.301 17:14:48 -- setup/devices.sh@63 -- # found=1 00:04:09.301 17:14:48 -- setup/devices.sh@60 -- # read -r pci _ _ status [... identical "[[ <pci addr> == \0\0\0\0\:\8\6\:\0\0\.\0 ]] / read -r pci _ _ status" xtrace repeated for the non-matching devices 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 ...] 00:04:09.559 17:14:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.559 17:14:48 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:09.559 17:14:48 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.559 17:14:48 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.559 
17:14:48 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.559 17:14:48 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:09.559 17:14:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.559 17:14:48 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.559 17:14:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.559 17:14:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:09.559 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.559 17:14:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.559 17:14:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.817 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:09.817 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:09.817 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:09.817 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:09.817 17:14:48 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:09.817 17:14:48 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:09.817 17:14:48 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.817 17:14:48 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:09.817 17:14:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:09.817 17:14:48 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.104 17:14:48 -- setup/devices.sh@116 -- # verify 0000:86:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.104 17:14:48 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:10.104 17:14:48 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:10.104 17:14:48 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.104 17:14:48 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.104 17:14:48 -- setup/devices.sh@53 -- # local found=0 00:04:10.104 17:14:48 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.104 17:14:48 -- setup/devices.sh@56 -- # : 00:04:10.104 17:14:48 -- setup/devices.sh@59 -- # local pci status 00:04:10.104 17:14:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.104 17:14:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:10.104 17:14:48 -- setup/devices.sh@47 -- # setup output config 00:04:10.104 17:14:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.104 17:14:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:12.633 17:14:51 -- setup/devices.sh@63 -- # found=1 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.633 17:14:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.633 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.892 17:14:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.892 17:14:51 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:12.892 17:14:51 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.892 17:14:51 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.892 17:14:51 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.892 17:14:51 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.892 17:14:51 -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:04:12.892 17:14:51 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:12.892 17:14:51 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:12.892 17:14:51 -- setup/devices.sh@50 -- # local mount_point= 00:04:12.892 17:14:51 -- setup/devices.sh@51 -- # local test_file= 00:04:12.892 17:14:51 -- setup/devices.sh@53 -- # local found=0 00:04:12.892 17:14:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:12.892 17:14:51 -- setup/devices.sh@59 -- # local pci status 00:04:12.892 17:14:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.892 17:14:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:12.892 17:14:51 -- setup/devices.sh@47 -- # setup 
output config 00:04:12.892 17:14:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.892 17:14:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:16.181 17:14:54 -- setup/devices.sh@63 -- # found=1 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.181 17:14:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.181 17:14:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:16.181 17:14:54 -- setup/devices.sh@68 -- # return 0 00:04:16.181 17:14:54 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:16.181 17:14:54 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.181 17:14:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.181 17:14:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.181 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:04:16.181 00:04:16.181 real 0m11.372s 00:04:16.181 user 0m3.426s 00:04:16.181 sys 0m5.809s 00:04:16.181 17:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.181 17:14:54 -- common/autotest_common.sh@10 -- # set +x 00:04:16.181 ************************************ 00:04:16.181 END TEST nvme_mount 00:04:16.181 ************************************ 00:04:16.181 17:14:54 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:16.181 17:14:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.181 17:14:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.181 17:14:54 -- common/autotest_common.sh@10 -- # set +x 00:04:16.181 ************************************ 00:04:16.181 START TEST dm_mount 00:04:16.181 ************************************ 00:04:16.181 17:14:54 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:16.181 17:14:54 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:16.181 17:14:54 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:16.181 17:14:54 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:16.181 17:14:54 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:16.181 17:14:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.181 17:14:54 -- setup/common.sh@40 -- # local part_no=2 00:04:16.181 17:14:54 -- setup/common.sh@41 -- # local size=1073741824 00:04:16.181 17:14:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.181 17:14:54 -- setup/common.sh@44 -- # parts=() 00:04:16.181 17:14:54 -- setup/common.sh@44 -- # local parts 00:04:16.181 17:14:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.181 17:14:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.181 17:14:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.181 17:14:54 -- setup/common.sh@46 -- # (( part++ )) 00:04:16.181 17:14:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.181 17:14:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
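An aside on the earlier wipefs report ("2 bytes were erased at offset 0x00000438 (ext4): 53 ef"): the ext4 superblock starts 1024 bytes into the device, and its little-endian magic 0xEF53 sits 0x38 bytes into the superblock, so erasing the filesystem signature means zeroing exactly those 2 bytes at 0x438. A quick check of that arithmetic:

```shell
# ext4 superblock offset (1024) + s_magic offset within it (0x38)
# gives the 0x438 offset wipefs reports when it erases "53 ef".
ext4_magic_offset=$(( 1024 + 0x38 ))
printf '0x%08x\n' "$ext4_magic_offset"   # prints 0x00000438
```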
00:04:16.181 17:14:54 -- setup/common.sh@46 -- # (( part++ )) 00:04:16.181 17:14:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.181 17:14:54 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.181 17:14:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.181 17:14:54 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:17.116 Creating new GPT entries in memory. 00:04:17.117 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.117 other utilities. 00:04:17.117 17:14:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.117 17:14:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.117 17:14:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.117 17:14:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.117 17:14:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.089 Creating new GPT entries in memory. 00:04:18.089 The operation has completed successfully. 00:04:18.089 17:14:56 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.089 17:14:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.089 17:14:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.089 17:14:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.089 17:14:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:19.050 The operation has completed successfully. 
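The two sgdisk calls above come from the `partition_drive` loop in setup/common.sh: it converts the 1 GiB partition size (1073741824 bytes) into 512-byte sectors and lays the partitions out end to end starting at sector 2048. A non-destructive sketch of that arithmetic (`plan_partitions` is a hypothetical name; it only echoes the commands it would run):

```shell
# Echo the sgdisk commands the partition loop would issue, without
# touching any disk. size is in bytes; sgdisk ranges are in sectors.
plan_partitions() {
  local disk=$1 part_no=$2 size=$3
  local part part_start=0 part_end=0
  (( size /= 512 ))                      # bytes -> 512-byte sectors
  echo "sgdisk $disk --zap-all"
  for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "sgdisk $disk --new=${part}:${part_start}:${part_end}"
  done
}

plan_partitions /dev/nvme0n1 2 1073741824
```

For two 1 GiB partitions this reproduces the ranges seen in the trace: 1:2048:2099199 and 2:2099200:4196351.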
00:04:19.050 17:14:57 -- setup/common.sh@57 -- # (( part++ )) 00:04:19.050 17:14:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.050 17:14:57 -- setup/common.sh@62 -- # wait 3901343 00:04:19.050 17:14:57 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:19.050 17:14:57 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.050 17:14:57 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.050 17:14:57 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:19.050 17:14:57 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:19.050 17:14:57 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.050 17:14:57 -- setup/devices.sh@161 -- # break 00:04:19.050 17:14:57 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.050 17:14:57 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:19.050 17:14:57 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:19.050 17:14:57 -- setup/devices.sh@166 -- # dm=dm-0 00:04:19.050 17:14:57 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:19.050 17:14:57 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:19.050 17:14:57 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.050 17:14:57 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:19.050 17:14:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.050 17:14:57 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.050 17:14:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:19.050 17:14:57 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.050 17:14:57 -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.050 17:14:57 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:19.050 17:14:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:19.050 17:14:57 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.050 17:14:57 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.050 17:14:57 -- setup/devices.sh@53 -- # local found=0 00:04:19.050 17:14:57 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.050 17:14:57 -- setup/devices.sh@56 -- # : 00:04:19.050 17:14:57 -- setup/devices.sh@59 -- # local pci status 00:04:19.050 17:14:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.050 17:14:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:19.050 17:14:57 -- setup/devices.sh@47 -- # setup output config 00:04:19.050 17:14:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.050 17:14:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.337 17:15:00 -- setup/devices.sh@63 -- # found=1 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 
17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.337 17:15:00 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:22.337 17:15:00 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.337 17:15:00 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.337 17:15:00 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:22.337 17:15:00 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.337 17:15:00 -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:22.337 17:15:00 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:22.337 17:15:00 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:22.337 17:15:00 -- setup/devices.sh@50 -- # local mount_point= 00:04:22.337 17:15:00 -- setup/devices.sh@51 -- # local test_file= 00:04:22.337 17:15:00 -- setup/devices.sh@53 -- # local found=0 00:04:22.337 17:15:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:04:22.337 17:15:00 -- setup/devices.sh@59 -- # local pci status 00:04:22.337 17:15:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.337 17:15:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:22.337 17:15:00 -- setup/devices.sh@47 -- # setup output config 00:04:22.337 17:15:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.337 17:15:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:24.870 17:15:03 -- setup/devices.sh@63 -- # found=1 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 
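The long runs of `[[ 0000:xx:04.y == \0\0\0\0\:\8\6\:\0\0\.\0 ]]` above are the `verify` scan in devices.sh: `setup.sh config` prints one line per PCI device, and the loop reads each line with `read -r pci _ _ status`, comparing the BDF against the device under test. A simplified, self-contained sketch of that pattern (hypothetical helper name; the real loop also inspects the status text before setting `found`):

```shell
# Read "<bdf> <vendor> <device> <status...>" lines from stdin and
# report 1 if the target BDF was seen, 0 otherwise.
scan_for_dev() {
  local dev=$1 found=0 pci status
  while read -r pci _ _ status; do
    [[ $pci == "$dev" ]] && found=1
  done
  echo "$found"
}

printf '%s\n' \
  '0000:00:04.0 8086 2021 ioatdma' \
  '0000:86:00.0 8086 0a54 Active devices: mount@nvme0n1' \
  | scan_for_dev 0000:86:00.0
```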
00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:24.870 17:15:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.870 17:15:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.870 17:15:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.870 17:15:03 -- setup/devices.sh@68 -- # return 0 00:04:24.870 17:15:03 -- setup/devices.sh@187 -- # cleanup_dm 00:04:24.870 17:15:03 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.871 17:15:03 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.871 17:15:03 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:24.871 17:15:03 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.871 17:15:03 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:24.871 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.871 17:15:03 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.871 17:15:03 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:24.871 00:04:24.871 real 0m9.120s 00:04:24.871 user 0m2.239s 00:04:24.871 sys 0m3.919s 00:04:24.871 17:15:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.871 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:04:24.871 ************************************ 00:04:24.871 END TEST dm_mount 00:04:24.871 ************************************ 00:04:25.129 17:15:03 -- setup/devices.sh@1 -- # cleanup 00:04:25.129 17:15:03 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:25.129 17:15:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.129 17:15:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.129 17:15:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.129 17:15:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.129 17:15:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.388 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:25.388 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:25.388 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.388 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.388 17:15:04 -- setup/devices.sh@12 -- # cleanup_dm 00:04:25.388 
17:15:04 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.388 17:15:04 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.388 17:15:04 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.388 17:15:04 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.388 17:15:04 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.388 17:15:04 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:25.388 00:04:25.388 real 0m24.315s 00:04:25.388 user 0m7.067s 00:04:25.388 sys 0m12.032s 00:04:25.388 17:15:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.388 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:04:25.388 ************************************ 00:04:25.388 END TEST devices 00:04:25.388 ************************************ 00:04:25.388 00:04:25.388 real 1m21.120s 00:04:25.388 user 0m26.525s 00:04:25.388 sys 0m45.121s 00:04:25.388 17:15:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.388 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:04:25.388 ************************************ 00:04:25.388 END TEST setup.sh 00:04:25.388 ************************************ 00:04:25.388 17:15:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:28.676 Hugepages 00:04:28.676 node hugesize free / total 00:04:28.676 node0 1048576kB 0 / 0 00:04:28.676 node0 2048kB 2048 / 2048 00:04:28.676 node1 1048576kB 0 / 0 00:04:28.676 node1 2048kB 0 / 0 00:04:28.676 00:04:28.676 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.676 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:28.676 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:28.676 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:28.676 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:28.676 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:28.676 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:28.676 
I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:28.676 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:28.676 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:28.676 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:28.676 17:15:07 -- spdk/autotest.sh@141 -- # uname -s 00:04:28.676 17:15:07 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:28.676 17:15:07 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:28.676 17:15:07 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.208 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.208 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.466 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.466 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.402 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.402 17:15:11 -- common/autotest_common.sh@1517 
-- # sleep 1 00:04:33.337 17:15:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:33.337 17:15:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:33.337 17:15:12 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.337 17:15:12 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:33.337 17:15:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:33.337 17:15:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:33.337 17:15:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.337 17:15:12 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.337 17:15:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:33.337 17:15:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:33.337 17:15:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:04:33.337 17:15:12 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.646 Waiting for block devices as requested 00:04:36.646 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:04:36.646 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.646 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.646 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.646 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.646 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.905 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.905 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.905 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.905 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:37.164 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:37.164 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:37.164 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:37.422 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:37.422 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:37.422 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:37.681 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.681 17:15:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:37.681 17:15:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1487 -- # grep 0000:86:00.0/nvme/nvme 00:04:37.681 17:15:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:04:37.681 17:15:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:37.681 17:15:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:37.681 17:15:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:37.681 17:15:16 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:04:37.681 17:15:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:37.681 17:15:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:37.681 17:15:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:37.681 17:15:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:37.681 17:15:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:37.681 17:15:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:37.681 17:15:16 -- common/autotest_common.sh@1540 -- # [[ 
0 -eq 0 ]] 00:04:37.681 17:15:16 -- common/autotest_common.sh@1542 -- # continue 00:04:37.681 17:15:16 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:37.681 17:15:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:37.681 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:04:37.681 17:15:16 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:37.681 17:15:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:37.681 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:04:37.681 17:15:16 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.967 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.967 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:41.534 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:41.793 17:15:20 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:41.793 17:15:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:41.793 17:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:41.793 17:15:20 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:41.793 17:15:20 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 
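The pre_cleanup pass above reads the OACS (Optional Admin Command Support) field from `nvme id-ctrl` and masks bit 3 to decide whether the controller supports Namespace Management. A minimal self-contained sketch of that parsing step, using a hypothetical sample line in place of the live controller output:

```shell
# Hypothetical sample of one 'nvme id-ctrl' output line (the log above
# parses the same field from the live controller at 0000:86:00.0)
sample='oacs      : 0xe'
oacs=$(printf '%s\n' "$sample" | grep oacs | cut -d: -f2)
# Bit 3 (0x8) of OACS advertises Namespace Management support
oacs_ns_manage=$((oacs & 0x8))
if [ "$oacs_ns_manage" -ne 0 ]; then
  echo "namespace management supported"
fi
```

With the sampled value `0xe` (bits 1-3 set), the mask yields 8, matching the `oacs_ns_manage=8` seen in the trace.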
00:04:41.793 17:15:20 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.793 17:15:20 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:41.793 17:15:20 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:41.793 17:15:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:41.793 17:15:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.793 17:15:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.793 17:15:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.793 17:15:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.793 17:15:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.793 17:15:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:41.793 17:15:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:04:41.793 17:15:20 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:41.793 17:15:20 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:04:41.793 17:15:20 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:41.793 17:15:20 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:41.793 17:15:20 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:41.793 17:15:20 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:86:00.0 00:04:41.793 17:15:20 -- common/autotest_common.sh@1577 -- # [[ -z 0000:86:00.0 ]] 00:04:41.793 17:15:20 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3911565 00:04:41.793 17:15:20 -- common/autotest_common.sh@1583 -- # waitforlisten 3911565 00:04:41.793 17:15:20 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.793 17:15:20 -- common/autotest_common.sh@819 -- # '[' -z 3911565 ']' 00:04:41.793 17:15:20 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.793 17:15:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:41.793 17:15:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.793 17:15:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:41.793 17:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:42.051 [2024-07-12 17:15:20.770732] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:42.051 [2024-07-12 17:15:20.770796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911565 ] 00:04:42.051 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.051 [2024-07-12 17:15:20.852787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.051 [2024-07-12 17:15:20.895416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.051 [2024-07-12 17:15:20.895573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.986 17:15:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:42.986 17:15:21 -- common/autotest_common.sh@852 -- # return 0 00:04:42.986 17:15:21 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:42.986 17:15:21 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:42.986 17:15:21 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:04:46.269 nvme0n1 00:04:46.269 17:15:24 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_opal_revert -b nvme0 -p test 00:04:46.269 [2024-07-12 17:15:24.918321] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:46.269 request: 00:04:46.269 { 00:04:46.269 "nvme_ctrlr_name": "nvme0", 00:04:46.269 "password": "test", 00:04:46.269 "method": "bdev_nvme_opal_revert", 00:04:46.269 "req_id": 1 00:04:46.269 } 00:04:46.269 Got JSON-RPC error response 00:04:46.269 response: 00:04:46.269 { 00:04:46.269 "code": -32602, 00:04:46.269 "message": "Invalid parameters" 00:04:46.269 } 00:04:46.269 17:15:24 -- common/autotest_common.sh@1589 -- # true 00:04:46.269 17:15:24 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:46.269 17:15:24 -- common/autotest_common.sh@1593 -- # killprocess 3911565 00:04:46.269 17:15:24 -- common/autotest_common.sh@926 -- # '[' -z 3911565 ']' 00:04:46.269 17:15:24 -- common/autotest_common.sh@930 -- # kill -0 3911565 00:04:46.269 17:15:24 -- common/autotest_common.sh@931 -- # uname 00:04:46.269 17:15:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:46.269 17:15:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3911565 00:04:46.269 17:15:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:46.269 17:15:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:46.269 17:15:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3911565' 00:04:46.269 killing process with pid 3911565 00:04:46.269 17:15:24 -- common/autotest_common.sh@945 -- # kill 3911565 00:04:46.269 17:15:24 -- common/autotest_common.sh@950 -- # wait 3911565 00:04:48.169 17:15:26 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:48.169 17:15:26 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:48.169 17:15:26 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:48.169 17:15:26 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:48.169 17:15:26 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:48.169 17:15:26 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:04:48.169 17:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:48.169 17:15:26 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.170 17:15:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.170 17:15:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.170 17:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:48.170 ************************************ 00:04:48.170 START TEST env 00:04:48.170 ************************************ 00:04:48.170 17:15:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.170 * Looking for test storage... 00:04:48.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:48.170 17:15:26 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.170 17:15:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.170 17:15:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.170 17:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:48.170 ************************************ 00:04:48.170 START TEST env_memory 00:04:48.170 ************************************ 00:04:48.170 17:15:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.170 00:04:48.170 00:04:48.170 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.170 http://cunit.sourceforge.net/ 00:04:48.170 00:04:48.170 00:04:48.170 Suite: memory 00:04:48.170 Test: alloc and free memory map ...[2024-07-12 17:15:26.831445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:48.170 passed 00:04:48.170 Test: mem map translation ...[2024-07-12 17:15:26.860718] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:48.170 [2024-07-12 17:15:26.860737] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:48.170 [2024-07-12 17:15:26.860791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:48.170 [2024-07-12 17:15:26.860804] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:48.170 passed 00:04:48.170 Test: mem map registration ...[2024-07-12 17:15:26.921026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:48.170 [2024-07-12 17:15:26.921044] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:48.170 passed 00:04:48.170 Test: mem map adjacent registrations ...passed 00:04:48.170 00:04:48.170 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.170 suites 1 1 n/a 0 0 00:04:48.170 tests 4 4 4 0 0 00:04:48.170 asserts 152 152 152 0 n/a 00:04:48.170 00:04:48.170 Elapsed time = 0.204 seconds 00:04:48.170 00:04:48.170 real 0m0.217s 00:04:48.170 user 0m0.205s 00:04:48.170 sys 0m0.011s 00:04:48.170 17:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.170 17:15:27 -- common/autotest_common.sh@10 -- # set +x 00:04:48.170 ************************************ 00:04:48.170 END TEST env_memory 00:04:48.170 ************************************ 00:04:48.170 17:15:27 -- 
env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.170 17:15:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.170 17:15:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.170 17:15:27 -- common/autotest_common.sh@10 -- # set +x 00:04:48.170 ************************************ 00:04:48.170 START TEST env_vtophys 00:04:48.170 ************************************ 00:04:48.170 17:15:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.170 EAL: lib.eal log level changed from notice to debug 00:04:48.170 EAL: Detected lcore 0 as core 0 on socket 0 00:04:48.170 EAL: Detected lcore 1 as core 1 on socket 0 00:04:48.170 EAL: Detected lcore 2 as core 2 on socket 0 00:04:48.170 EAL: Detected lcore 3 as core 3 on socket 0 00:04:48.170 EAL: Detected lcore 4 as core 4 on socket 0 00:04:48.170 EAL: Detected lcore 5 as core 5 on socket 0 00:04:48.170 EAL: Detected lcore 6 as core 6 on socket 0 00:04:48.170 EAL: Detected lcore 7 as core 8 on socket 0 00:04:48.170 EAL: Detected lcore 8 as core 9 on socket 0 00:04:48.170 EAL: Detected lcore 9 as core 10 on socket 0 00:04:48.170 EAL: Detected lcore 10 as core 11 on socket 0 00:04:48.170 EAL: Detected lcore 11 as core 12 on socket 0 00:04:48.170 EAL: Detected lcore 12 as core 13 on socket 0 00:04:48.170 EAL: Detected lcore 13 as core 14 on socket 0 00:04:48.170 EAL: Detected lcore 14 as core 16 on socket 0 00:04:48.170 EAL: Detected lcore 15 as core 17 on socket 0 00:04:48.170 EAL: Detected lcore 16 as core 18 on socket 0 00:04:48.170 EAL: Detected lcore 17 as core 19 on socket 0 00:04:48.170 EAL: Detected lcore 18 as core 20 on socket 0 00:04:48.170 EAL: Detected lcore 19 as core 21 on socket 0 00:04:48.170 EAL: Detected lcore 20 as core 22 on socket 0 00:04:48.170 EAL: Detected lcore 21 as core 24 on socket 0 00:04:48.170 EAL: Detected lcore 22 as core 25 on 
socket 0 00:04:48.170 EAL: Detected lcore 23 as core 26 on socket 0 00:04:48.170 EAL: Detected lcore 24 as core 27 on socket 0 00:04:48.170 EAL: Detected lcore 25 as core 28 on socket 0 00:04:48.170 EAL: Detected lcore 26 as core 29 on socket 0 00:04:48.170 EAL: Detected lcore 27 as core 30 on socket 0 00:04:48.170 EAL: Detected lcore 28 as core 0 on socket 1 00:04:48.170 EAL: Detected lcore 29 as core 1 on socket 1 00:04:48.170 EAL: Detected lcore 30 as core 2 on socket 1 00:04:48.170 EAL: Detected lcore 31 as core 3 on socket 1 00:04:48.170 EAL: Detected lcore 32 as core 4 on socket 1 00:04:48.170 EAL: Detected lcore 33 as core 5 on socket 1 00:04:48.170 EAL: Detected lcore 34 as core 6 on socket 1 00:04:48.170 EAL: Detected lcore 35 as core 8 on socket 1 00:04:48.170 EAL: Detected lcore 36 as core 9 on socket 1 00:04:48.170 EAL: Detected lcore 37 as core 10 on socket 1 00:04:48.170 EAL: Detected lcore 38 as core 11 on socket 1 00:04:48.170 EAL: Detected lcore 39 as core 12 on socket 1 00:04:48.170 EAL: Detected lcore 40 as core 13 on socket 1 00:04:48.170 EAL: Detected lcore 41 as core 14 on socket 1 00:04:48.170 EAL: Detected lcore 42 as core 16 on socket 1 00:04:48.170 EAL: Detected lcore 43 as core 17 on socket 1 00:04:48.170 EAL: Detected lcore 44 as core 18 on socket 1 00:04:48.170 EAL: Detected lcore 45 as core 19 on socket 1 00:04:48.170 EAL: Detected lcore 46 as core 20 on socket 1 00:04:48.170 EAL: Detected lcore 47 as core 21 on socket 1 00:04:48.170 EAL: Detected lcore 48 as core 22 on socket 1 00:04:48.170 EAL: Detected lcore 49 as core 24 on socket 1 00:04:48.170 EAL: Detected lcore 50 as core 25 on socket 1 00:04:48.170 EAL: Detected lcore 51 as core 26 on socket 1 00:04:48.170 EAL: Detected lcore 52 as core 27 on socket 1 00:04:48.170 EAL: Detected lcore 53 as core 28 on socket 1 00:04:48.170 EAL: Detected lcore 54 as core 29 on socket 1 00:04:48.170 EAL: Detected lcore 55 as core 30 on socket 1 00:04:48.170 EAL: Detected lcore 56 as core 0 on 
socket 0 00:04:48.170 EAL: Detected lcore 57 as core 1 on socket 0 00:04:48.170 EAL: Detected lcore 58 as core 2 on socket 0 00:04:48.170 EAL: Detected lcore 59 as core 3 on socket 0 00:04:48.170 EAL: Detected lcore 60 as core 4 on socket 0 00:04:48.170 EAL: Detected lcore 61 as core 5 on socket 0 00:04:48.170 EAL: Detected lcore 62 as core 6 on socket 0 00:04:48.170 EAL: Detected lcore 63 as core 8 on socket 0 00:04:48.170 EAL: Detected lcore 64 as core 9 on socket 0 00:04:48.170 EAL: Detected lcore 65 as core 10 on socket 0 00:04:48.170 EAL: Detected lcore 66 as core 11 on socket 0 00:04:48.170 EAL: Detected lcore 67 as core 12 on socket 0 00:04:48.170 EAL: Detected lcore 68 as core 13 on socket 0 00:04:48.170 EAL: Detected lcore 69 as core 14 on socket 0 00:04:48.170 EAL: Detected lcore 70 as core 16 on socket 0 00:04:48.170 EAL: Detected lcore 71 as core 17 on socket 0 00:04:48.170 EAL: Detected lcore 72 as core 18 on socket 0 00:04:48.170 EAL: Detected lcore 73 as core 19 on socket 0 00:04:48.170 EAL: Detected lcore 74 as core 20 on socket 0 00:04:48.170 EAL: Detected lcore 75 as core 21 on socket 0 00:04:48.170 EAL: Detected lcore 76 as core 22 on socket 0 00:04:48.170 EAL: Detected lcore 77 as core 24 on socket 0 00:04:48.170 EAL: Detected lcore 78 as core 25 on socket 0 00:04:48.170 EAL: Detected lcore 79 as core 26 on socket 0 00:04:48.170 EAL: Detected lcore 80 as core 27 on socket 0 00:04:48.170 EAL: Detected lcore 81 as core 28 on socket 0 00:04:48.170 EAL: Detected lcore 82 as core 29 on socket 0 00:04:48.170 EAL: Detected lcore 83 as core 30 on socket 0 00:04:48.170 EAL: Detected lcore 84 as core 0 on socket 1 00:04:48.170 EAL: Detected lcore 85 as core 1 on socket 1 00:04:48.170 EAL: Detected lcore 86 as core 2 on socket 1 00:04:48.170 EAL: Detected lcore 87 as core 3 on socket 1 00:04:48.170 EAL: Detected lcore 88 as core 4 on socket 1 00:04:48.170 EAL: Detected lcore 89 as core 5 on socket 1 00:04:48.170 EAL: Detected lcore 90 as core 6 on socket 1 
00:04:48.170 EAL: Detected lcore 91 as core 8 on socket 1 00:04:48.170 EAL: Detected lcore 92 as core 9 on socket 1 00:04:48.170 EAL: Detected lcore 93 as core 10 on socket 1 00:04:48.170 EAL: Detected lcore 94 as core 11 on socket 1 00:04:48.170 EAL: Detected lcore 95 as core 12 on socket 1 00:04:48.170 EAL: Detected lcore 96 as core 13 on socket 1 00:04:48.170 EAL: Detected lcore 97 as core 14 on socket 1 00:04:48.170 EAL: Detected lcore 98 as core 16 on socket 1 00:04:48.170 EAL: Detected lcore 99 as core 17 on socket 1 00:04:48.170 EAL: Detected lcore 100 as core 18 on socket 1 00:04:48.170 EAL: Detected lcore 101 as core 19 on socket 1 00:04:48.170 EAL: Detected lcore 102 as core 20 on socket 1 00:04:48.170 EAL: Detected lcore 103 as core 21 on socket 1 00:04:48.170 EAL: Detected lcore 104 as core 22 on socket 1 00:04:48.170 EAL: Detected lcore 105 as core 24 on socket 1 00:04:48.170 EAL: Detected lcore 106 as core 25 on socket 1 00:04:48.170 EAL: Detected lcore 107 as core 26 on socket 1 00:04:48.170 EAL: Detected lcore 108 as core 27 on socket 1 00:04:48.170 EAL: Detected lcore 109 as core 28 on socket 1 00:04:48.170 EAL: Detected lcore 110 as core 29 on socket 1 00:04:48.170 EAL: Detected lcore 111 as core 30 on socket 1 00:04:48.170 EAL: Maximum logical cores by configuration: 128 00:04:48.170 EAL: Detected CPU lcores: 112 00:04:48.170 EAL: Detected NUMA nodes: 2 00:04:48.170 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:48.170 EAL: Detected shared linkage of DPDK 00:04:48.170 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:48.170 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:48.170 EAL: Registered [vdev] bus. 
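The long "Detected lcore N as core M on socket S" listing above is EAL reading the Linux sysfs CPU topology. A hypothetical minimal sketch of one such line; the sysfs paths in the comments are the standard Linux locations, and the values here are mocked so the sketch is self-contained:

```shell
# Sketch: EAL-style lcore mapping from sysfs CPU topology.
topology() {  # $1 = lcore id; prints "core socket"
  # A real version would read:
  #   /sys/devices/system/cpu/cpu$1/topology/core_id
  #   /sys/devices/system/cpu/cpu$1/topology/physical_package_id
  # Mocked here for lcore 0:
  echo "0 0"
}
read -r core socket <<<"$(topology 0)"
echo "Detected lcore 0 as core $core on socket $socket"
```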
00:04:48.170 EAL: bus.vdev log level changed from disabled to notice 00:04:48.170 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:48.170 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:48.170 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:48.171 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:48.171 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:48.171 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:48.171 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:48.171 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:48.171 EAL: No shared files mode enabled, IPC will be disabled 00:04:48.171 EAL: No shared files mode enabled, IPC is disabled 00:04:48.171 EAL: Bus pci wants IOVA as 'DC' 00:04:48.171 EAL: Bus vdev wants IOVA as 'DC' 00:04:48.171 EAL: Buses did not request a specific IOVA mode. 00:04:48.171 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:48.171 EAL: Selected IOVA mode 'VA' 00:04:48.171 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.171 EAL: Probing VFIO support... 00:04:48.171 EAL: IOMMU type 1 (Type 1) is supported 00:04:48.171 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:48.171 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:48.171 EAL: VFIO support initialized 00:04:48.171 EAL: Ask a virtual area of 0x2e000 bytes 00:04:48.171 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:48.171 EAL: Setting up physically contiguous memory... 
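Behind "IOMMU is available, selecting IOVA as VA mode" above, EAL probes for an active IOMMU before choosing between virtual- and physical-address IOVA. A hedged sketch of an equivalent userspace check; `/sys/class/iommu` is the standard sysfs location (empty or absent when no IOMMU groups exist), and the result naturally depends on the host:

```shell
# Sketch: pick IOVA mode the way EAL's probe does, from sysfs.
if [ -d /sys/class/iommu ] && [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
  iova_mode=VA   # IOMMU present: virtual addresses usable as IOVAs
else
  iova_mode=PA   # no IOMMU: fall back to physical addresses
fi
echo "Selected IOVA mode '$iova_mode'"
```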
00:04:48.171 EAL: Setting maximum number of open files to 524288 00:04:48.171 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:48.171 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:48.171 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:48.171 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:48.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.171 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:48.171 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.171 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:48.171 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:48.171 EAL: Hugepages will be freed exactly as allocated. 
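Each memseg list created above reserves virtual address space for `n_segs:8192` hugepages of 2 MiB, which is exactly the `size = 0x400000000` (16 GiB) reservations the log reports. The arithmetic, as a sketch:

```shell
# 8192 segments * 2 MiB hugepages = 16 GiB of reserved VA per memseg list
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))
reserve=$((n_segs * hugepage_sz))
printf '0x%x\n' "$reserve"   # -> 0x400000000, matching the log
```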
00:04:48.171 EAL: No shared files mode enabled, IPC is disabled 00:04:48.171 EAL: No shared files mode enabled, IPC is disabled 00:04:48.171 EAL: TSC frequency is ~2200000 KHz 00:04:48.171 EAL: Main lcore 0 is ready (tid=7fa38a877a00;cpuset=[0]) 00:04:48.171 EAL: Trying to obtain current memory policy. 00:04:48.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.171 EAL: Restoring previous memory policy: 0 00:04:48.171 EAL: request: mp_malloc_sync 00:04:48.171 EAL: No shared files mode enabled, IPC is disabled 00:04:48.171 EAL: Heap on socket 0 was expanded by 2MB 00:04:48.171 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:48.171 EAL: probe driver: 8086:37d2 net_i40e 00:04:48.171 EAL: Not managed by a supported kernel driver, skipped 00:04:48.171 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:48.171 EAL: probe driver: 8086:37d2 net_i40e 00:04:48.171 EAL: Not managed by a supported kernel driver, skipped 00:04:48.171 EAL: No shared files mode enabled, IPC is disabled 00:04:48.171 EAL: No shared files mode enabled, IPC is disabled 00:04:48.171 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:48.171 EAL: Mem event callback 'spdk:(nil)' registered 00:04:48.430 00:04:48.430 00:04:48.430 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.430 http://cunit.sourceforge.net/ 00:04:48.430 00:04:48.430 00:04:48.430 Suite: components_suite 00:04:48.430 Test: vtophys_malloc_test ...passed 00:04:48.430 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 4MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 4MB 00:04:48.430 EAL: Trying to obtain current memory policy. 00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 6MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 6MB 00:04:48.430 EAL: Trying to obtain current memory policy. 00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 10MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 10MB 00:04:48.430 EAL: Trying to obtain current memory policy. 
00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 18MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 18MB 00:04:48.430 EAL: Trying to obtain current memory policy. 00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 34MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 34MB 00:04:48.430 EAL: Trying to obtain current memory policy. 00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 66MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 66MB 00:04:48.430 EAL: Trying to obtain current memory policy. 
00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.430 EAL: Trying to obtain current memory policy. 00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.430 EAL: Restoring previous memory policy: 4 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.430 EAL: request: mp_malloc_sync 00:04:48.430 EAL: No shared files mode enabled, IPC is disabled 00:04:48.430 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.430 EAL: Trying to obtain current memory policy. 00:04:48.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.687 EAL: Restoring previous memory policy: 4 00:04:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.687 EAL: request: mp_malloc_sync 00:04:48.687 EAL: No shared files mode enabled, IPC is disabled 00:04:48.687 EAL: Heap on socket 0 was expanded by 514MB 00:04:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.687 EAL: request: mp_malloc_sync 00:04:48.687 EAL: No shared files mode enabled, IPC is disabled 00:04:48.687 EAL: Heap on socket 0 was shrunk by 514MB 00:04:48.687 EAL: Trying to obtain current memory policy. 
00:04:48.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.945 EAL: Restoring previous memory policy: 4 00:04:48.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.945 EAL: request: mp_malloc_sync 00:04:48.945 EAL: No shared files mode enabled, IPC is disabled 00:04:48.945 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.203 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.461 EAL: request: mp_malloc_sync 00:04:49.461 EAL: No shared files mode enabled, IPC is disabled 00:04:49.461 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.461 passed 00:04:49.461 00:04:49.461 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.461 suites 1 1 n/a 0 0 00:04:49.461 tests 2 2 2 0 0 00:04:49.462 asserts 497 497 497 0 n/a 00:04:49.462 00:04:49.462 Elapsed time = 1.014 seconds 00:04:49.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.462 EAL: request: mp_malloc_sync 00:04:49.462 EAL: No shared files mode enabled, IPC is disabled 00:04:49.462 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.462 EAL: No shared files mode enabled, IPC is disabled 00:04:49.462 EAL: No shared files mode enabled, IPC is disabled 00:04:49.462 EAL: No shared files mode enabled, IPC is disabled 00:04:49.462 00:04:49.462 real 0m1.147s 00:04:49.462 user 0m0.669s 00:04:49.462 sys 0m0.451s 00:04:49.462 17:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.462 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:49.462 ************************************ 00:04:49.462 END TEST env_vtophys 00:04:49.462 ************************************ 00:04:49.462 17:15:28 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.462 17:15:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.462 17:15:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.462 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:49.462 ************************************ 00:04:49.462 
START TEST env_pci 00:04:49.462 ************************************ 00:04:49.462 17:15:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.462 00:04:49.462 00:04:49.462 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.462 http://cunit.sourceforge.net/ 00:04:49.462 00:04:49.462 00:04:49.462 Suite: pci 00:04:49.462 Test: pci_hook ...[2024-07-12 17:15:28.238168] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3913084 has claimed it 00:04:49.462 EAL: Cannot find device (10000:00:01.0) 00:04:49.462 EAL: Failed to attach device on primary process 00:04:49.462 passed 00:04:49.462 00:04:49.462 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.462 suites 1 1 n/a 0 0 00:04:49.462 tests 1 1 1 0 0 00:04:49.462 asserts 25 25 25 0 n/a 00:04:49.462 00:04:49.462 Elapsed time = 0.028 seconds 00:04:49.462 00:04:49.462 real 0m0.047s 00:04:49.462 user 0m0.012s 00:04:49.462 sys 0m0.034s 00:04:49.462 17:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.462 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:49.462 ************************************ 00:04:49.462 END TEST env_pci 00:04:49.462 ************************************ 00:04:49.462 17:15:28 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:49.462 17:15:28 -- env/env.sh@15 -- # uname 00:04:49.462 17:15:28 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:49.462 17:15:28 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:49.462 17:15:28 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.462 17:15:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:49.462 17:15:28 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:04:49.462 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:49.462 ************************************ 00:04:49.462 START TEST env_dpdk_post_init 00:04:49.462 ************************************ 00:04:49.462 17:15:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.462 EAL: Detected CPU lcores: 112 00:04:49.462 EAL: Detected NUMA nodes: 2 00:04:49.462 EAL: Detected shared linkage of DPDK 00:04:49.462 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.462 EAL: Selected IOVA mode 'VA' 00:04:49.462 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.462 EAL: VFIO support initialized 00:04:49.462 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.720 EAL: Using IOMMU type 1 (Type 1) 00:04:49.720 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 
00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:49.721 EAL: Ignore mapping IO port bar(1) 00:04:49.721 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:50.657 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:04:53.936 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:04:53.936 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:04:53.936 Starting DPDK initialization... 00:04:53.936 Starting SPDK post initialization... 00:04:53.936 SPDK NVMe probe 00:04:53.937 Attaching to 0000:86:00.0 00:04:53.937 Attached to 0000:86:00.0 00:04:53.937 Cleaning up... 
00:04:53.937 00:04:53.937 real 0m4.442s 00:04:53.937 user 0m3.322s 00:04:53.937 sys 0m0.180s 00:04:53.937 17:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.937 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:04:53.937 ************************************ 00:04:53.937 END TEST env_dpdk_post_init 00:04:53.937 ************************************ 00:04:53.937 17:15:32 -- env/env.sh@26 -- # uname 00:04:53.937 17:15:32 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:53.937 17:15:32 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:53.937 17:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.937 17:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.937 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:04:53.937 ************************************ 00:04:53.937 START TEST env_mem_callbacks 00:04:53.937 ************************************ 00:04:53.937 17:15:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:53.937 EAL: Detected CPU lcores: 112 00:04:53.937 EAL: Detected NUMA nodes: 2 00:04:53.937 EAL: Detected shared linkage of DPDK 00:04:53.937 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.937 EAL: Selected IOVA mode 'VA' 00:04:53.937 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.937 EAL: VFIO support initialized 00:04:53.937 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:53.937 00:04:53.937 00:04:53.937 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.937 http://cunit.sourceforge.net/ 00:04:53.937 00:04:53.937 00:04:53.937 Suite: memory 00:04:53.937 Test: test ... 
00:04:53.937 register 0x200000200000 2097152 00:04:53.937 malloc 3145728 00:04:53.937 register 0x200000400000 4194304 00:04:53.937 buf 0x200000500000 len 3145728 PASSED 00:04:53.937 malloc 64 00:04:53.937 buf 0x2000004fff40 len 64 PASSED 00:04:53.937 malloc 4194304 00:04:53.937 register 0x200000800000 6291456 00:04:53.937 buf 0x200000a00000 len 4194304 PASSED 00:04:53.937 free 0x200000500000 3145728 00:04:53.937 free 0x2000004fff40 64 00:04:53.937 unregister 0x200000400000 4194304 PASSED 00:04:53.937 free 0x200000a00000 4194304 00:04:53.937 unregister 0x200000800000 6291456 PASSED 00:04:53.937 malloc 8388608 00:04:53.937 register 0x200000400000 10485760 00:04:53.937 buf 0x200000600000 len 8388608 PASSED 00:04:53.937 free 0x200000600000 8388608 00:04:53.937 unregister 0x200000400000 10485760 PASSED 00:04:53.937 passed 00:04:53.937 00:04:53.937 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.937 suites 1 1 n/a 0 0 00:04:53.937 tests 1 1 1 0 0 00:04:53.937 asserts 15 15 15 0 n/a 00:04:53.937 00:04:53.937 Elapsed time = 0.007 seconds 00:04:53.937 00:04:53.937 real 0m0.061s 00:04:53.937 user 0m0.021s 00:04:53.937 sys 0m0.040s 00:04:53.937 17:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.937 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:04:53.937 ************************************ 00:04:53.937 END TEST env_mem_callbacks 00:04:53.937 ************************************ 00:04:53.937 00:04:53.937 real 0m6.197s 00:04:53.937 user 0m4.332s 00:04:53.937 sys 0m0.934s 00:04:53.937 17:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.937 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:04:53.937 ************************************ 00:04:53.937 END TEST env 00:04:53.937 ************************************ 00:04:54.194 17:15:32 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:54.194 17:15:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:04:54.194 17:15:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.194 17:15:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.194 ************************************ 00:04:54.194 START TEST rpc 00:04:54.194 ************************************ 00:04:54.194 17:15:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:54.194 * Looking for test storage... 00:04:54.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.194 17:15:33 -- rpc/rpc.sh@65 -- # spdk_pid=3914010 00:04:54.194 17:15:33 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.194 17:15:33 -- rpc/rpc.sh@67 -- # waitforlisten 3914010 00:04:54.194 17:15:33 -- common/autotest_common.sh@819 -- # '[' -z 3914010 ']' 00:04:54.194 17:15:33 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:54.194 17:15:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.194 17:15:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.194 17:15:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.194 17:15:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.194 17:15:33 -- common/autotest_common.sh@10 -- # set +x 00:04:54.194 [2024-07-12 17:15:33.109026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:54.194 [2024-07-12 17:15:33.109134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914010 ] 00:04:54.452 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.452 [2024-07-12 17:15:33.226938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.452 [2024-07-12 17:15:33.269933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.452 [2024-07-12 17:15:33.270074] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:54.452 [2024-07-12 17:15:33.270085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3914010' to capture a snapshot of events at runtime. 00:04:54.452 [2024-07-12 17:15:33.270097] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3914010 for offline analysis/debug. 
00:04:54.452 [2024-07-12 17:15:33.270124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.386 17:15:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.386 17:15:34 -- common/autotest_common.sh@852 -- # return 0 00:04:55.386 17:15:34 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.386 17:15:34 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.386 17:15:34 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:55.386 17:15:34 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:55.386 17:15:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.386 17:15:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.386 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.386 ************************************ 00:04:55.386 START TEST rpc_integrity 00:04:55.386 ************************************ 00:04:55.386 17:15:34 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:55.386 17:15:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:55.386 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.386 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.386 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.386 17:15:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:55.386 17:15:34 -- rpc/rpc.sh@13 -- # jq length 00:04:55.386 17:15:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:55.386 17:15:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:55.386 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.386 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.386 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.386 17:15:34 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:55.386 17:15:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:55.386 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.386 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.386 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.386 17:15:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:55.386 { 00:04:55.386 "name": "Malloc0", 00:04:55.386 "aliases": [ 00:04:55.386 "450972b0-d47c-42fb-831e-aa04bedfc9b9" 00:04:55.386 ], 00:04:55.386 "product_name": "Malloc disk", 00:04:55.386 "block_size": 512, 00:04:55.386 "num_blocks": 16384, 00:04:55.386 "uuid": "450972b0-d47c-42fb-831e-aa04bedfc9b9", 00:04:55.386 "assigned_rate_limits": { 00:04:55.386 "rw_ios_per_sec": 0, 00:04:55.386 "rw_mbytes_per_sec": 0, 00:04:55.386 "r_mbytes_per_sec": 0, 00:04:55.386 "w_mbytes_per_sec": 0 00:04:55.386 }, 00:04:55.386 "claimed": false, 00:04:55.386 "zoned": false, 00:04:55.386 "supported_io_types": { 00:04:55.386 "read": true, 00:04:55.386 "write": true, 00:04:55.386 "unmap": true, 00:04:55.386 "write_zeroes": true, 00:04:55.386 "flush": true, 00:04:55.386 "reset": true, 00:04:55.386 "compare": false, 00:04:55.386 "compare_and_write": false, 00:04:55.386 "abort": true, 00:04:55.386 "nvme_admin": false, 00:04:55.386 "nvme_io": false 00:04:55.386 }, 00:04:55.386 "memory_domains": [ 00:04:55.386 { 00:04:55.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.386 "dma_device_type": 2 00:04:55.386 } 00:04:55.386 ], 00:04:55.386 "driver_specific": {} 00:04:55.386 } 00:04:55.386 ]' 00:04:55.386 17:15:34 -- rpc/rpc.sh@17 -- # jq length 00:04:55.644 17:15:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:04:55.644 17:15:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:55.644 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 [2024-07-12 17:15:34.372333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:55.644 [2024-07-12 17:15:34.372372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:55.644 [2024-07-12 17:15:34.372387] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d908a0 00:04:55.644 [2024-07-12 17:15:34.372396] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:55.644 [2024-07-12 17:15:34.373920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:55.644 [2024-07-12 17:15:34.373945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:55.644 Passthru0 00:04:55.644 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.644 17:15:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:55.644 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.644 17:15:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:55.644 { 00:04:55.644 "name": "Malloc0", 00:04:55.644 "aliases": [ 00:04:55.644 "450972b0-d47c-42fb-831e-aa04bedfc9b9" 00:04:55.644 ], 00:04:55.644 "product_name": "Malloc disk", 00:04:55.644 "block_size": 512, 00:04:55.644 "num_blocks": 16384, 00:04:55.644 "uuid": "450972b0-d47c-42fb-831e-aa04bedfc9b9", 00:04:55.644 "assigned_rate_limits": { 00:04:55.644 "rw_ios_per_sec": 0, 00:04:55.644 "rw_mbytes_per_sec": 0, 00:04:55.644 "r_mbytes_per_sec": 0, 00:04:55.644 "w_mbytes_per_sec": 0 00:04:55.644 }, 00:04:55.644 "claimed": true, 00:04:55.644 "claim_type": "exclusive_write", 00:04:55.644 "zoned": 
false, 00:04:55.644 "supported_io_types": { 00:04:55.644 "read": true, 00:04:55.644 "write": true, 00:04:55.644 "unmap": true, 00:04:55.644 "write_zeroes": true, 00:04:55.644 "flush": true, 00:04:55.644 "reset": true, 00:04:55.644 "compare": false, 00:04:55.644 "compare_and_write": false, 00:04:55.644 "abort": true, 00:04:55.644 "nvme_admin": false, 00:04:55.644 "nvme_io": false 00:04:55.644 }, 00:04:55.644 "memory_domains": [ 00:04:55.644 { 00:04:55.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.644 "dma_device_type": 2 00:04:55.644 } 00:04:55.644 ], 00:04:55.644 "driver_specific": {} 00:04:55.644 }, 00:04:55.644 { 00:04:55.644 "name": "Passthru0", 00:04:55.644 "aliases": [ 00:04:55.644 "fde1a277-c03a-59a2-b6eb-1186061602b4" 00:04:55.644 ], 00:04:55.644 "product_name": "passthru", 00:04:55.644 "block_size": 512, 00:04:55.644 "num_blocks": 16384, 00:04:55.644 "uuid": "fde1a277-c03a-59a2-b6eb-1186061602b4", 00:04:55.644 "assigned_rate_limits": { 00:04:55.644 "rw_ios_per_sec": 0, 00:04:55.644 "rw_mbytes_per_sec": 0, 00:04:55.644 "r_mbytes_per_sec": 0, 00:04:55.644 "w_mbytes_per_sec": 0 00:04:55.644 }, 00:04:55.644 "claimed": false, 00:04:55.644 "zoned": false, 00:04:55.644 "supported_io_types": { 00:04:55.644 "read": true, 00:04:55.644 "write": true, 00:04:55.644 "unmap": true, 00:04:55.644 "write_zeroes": true, 00:04:55.644 "flush": true, 00:04:55.644 "reset": true, 00:04:55.644 "compare": false, 00:04:55.644 "compare_and_write": false, 00:04:55.644 "abort": true, 00:04:55.644 "nvme_admin": false, 00:04:55.644 "nvme_io": false 00:04:55.644 }, 00:04:55.644 "memory_domains": [ 00:04:55.644 { 00:04:55.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.644 "dma_device_type": 2 00:04:55.644 } 00:04:55.644 ], 00:04:55.644 "driver_specific": { 00:04:55.644 "passthru": { 00:04:55.644 "name": "Passthru0", 00:04:55.644 "base_bdev_name": "Malloc0" 00:04:55.644 } 00:04:55.644 } 00:04:55.644 } 00:04:55.644 ]' 00:04:55.644 17:15:34 -- rpc/rpc.sh@21 -- # jq length 
00:04:55.644 17:15:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.644 17:15:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.644 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.644 17:15:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:55.644 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.644 17:15:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.644 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.644 17:15:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.644 17:15:34 -- rpc/rpc.sh@26 -- # jq length 00:04:55.644 17:15:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.644 00:04:55.644 real 0m0.311s 00:04:55.644 user 0m0.212s 00:04:55.644 sys 0m0.034s 00:04:55.644 17:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 ************************************ 00:04:55.644 END TEST rpc_integrity 00:04:55.644 ************************************ 00:04:55.644 17:15:34 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:55.644 17:15:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.644 17:15:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 ************************************ 00:04:55.644 START TEST rpc_plugins 00:04:55.644 ************************************ 00:04:55.644 17:15:34 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:55.644 17:15:34 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:55.644 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.644 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.644 17:15:34 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:55.644 17:15:34 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:55.644 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.644 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.901 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.901 17:15:34 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:55.901 { 00:04:55.901 "name": "Malloc1", 00:04:55.901 "aliases": [ 00:04:55.901 "6221a938-a7b0-4d1a-8bc8-58da5d4b0018" 00:04:55.901 ], 00:04:55.901 "product_name": "Malloc disk", 00:04:55.901 "block_size": 4096, 00:04:55.901 "num_blocks": 256, 00:04:55.901 "uuid": "6221a938-a7b0-4d1a-8bc8-58da5d4b0018", 00:04:55.901 "assigned_rate_limits": { 00:04:55.901 "rw_ios_per_sec": 0, 00:04:55.901 "rw_mbytes_per_sec": 0, 00:04:55.901 "r_mbytes_per_sec": 0, 00:04:55.901 "w_mbytes_per_sec": 0 00:04:55.901 }, 00:04:55.901 "claimed": false, 00:04:55.901 "zoned": false, 00:04:55.901 "supported_io_types": { 00:04:55.901 "read": true, 00:04:55.901 "write": true, 00:04:55.901 "unmap": true, 00:04:55.901 "write_zeroes": true, 00:04:55.901 "flush": true, 00:04:55.901 "reset": true, 00:04:55.901 "compare": false, 00:04:55.901 "compare_and_write": false, 00:04:55.901 "abort": true, 00:04:55.901 "nvme_admin": false, 00:04:55.901 "nvme_io": false 00:04:55.901 }, 00:04:55.902 "memory_domains": [ 00:04:55.902 { 00:04:55.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.902 "dma_device_type": 2 00:04:55.902 } 00:04:55.902 ], 00:04:55.902 "driver_specific": {} 00:04:55.902 } 00:04:55.902 ]' 00:04:55.902 17:15:34 -- rpc/rpc.sh@32 -- # jq length 00:04:55.902 17:15:34 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:55.902 17:15:34 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:55.902 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.902 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.902 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.902 17:15:34 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:55.902 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.902 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.902 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.902 17:15:34 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:55.902 17:15:34 -- rpc/rpc.sh@36 -- # jq length 00:04:55.902 17:15:34 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:55.902 00:04:55.902 real 0m0.137s 00:04:55.902 user 0m0.084s 00:04:55.902 sys 0m0.019s 00:04:55.902 17:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.902 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.902 ************************************ 00:04:55.902 END TEST rpc_plugins 00:04:55.902 ************************************ 00:04:55.902 17:15:34 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:55.902 17:15:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.902 17:15:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.902 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.902 ************************************ 00:04:55.902 START TEST rpc_trace_cmd_test 00:04:55.902 ************************************ 00:04:55.902 17:15:34 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:55.902 17:15:34 -- rpc/rpc.sh@40 -- # local info 00:04:55.902 17:15:34 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:55.902 17:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.902 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:55.902 17:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.902 17:15:34 -- 
rpc/rpc.sh@42 -- # info='{ 00:04:55.902 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3914010", 00:04:55.902 "tpoint_group_mask": "0x8", 00:04:55.902 "iscsi_conn": { 00:04:55.902 "mask": "0x2", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "scsi": { 00:04:55.902 "mask": "0x4", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "bdev": { 00:04:55.902 "mask": "0x8", 00:04:55.902 "tpoint_mask": "0xffffffffffffffff" 00:04:55.902 }, 00:04:55.902 "nvmf_rdma": { 00:04:55.902 "mask": "0x10", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "nvmf_tcp": { 00:04:55.902 "mask": "0x20", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "ftl": { 00:04:55.902 "mask": "0x40", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "blobfs": { 00:04:55.902 "mask": "0x80", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "dsa": { 00:04:55.902 "mask": "0x200", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "thread": { 00:04:55.902 "mask": "0x400", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "nvme_pcie": { 00:04:55.902 "mask": "0x800", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "iaa": { 00:04:55.902 "mask": "0x1000", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "nvme_tcp": { 00:04:55.902 "mask": "0x2000", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 }, 00:04:55.902 "bdev_nvme": { 00:04:55.902 "mask": "0x4000", 00:04:55.902 "tpoint_mask": "0x0" 00:04:55.902 } 00:04:55.902 }' 00:04:55.902 17:15:34 -- rpc/rpc.sh@43 -- # jq length 00:04:55.902 17:15:34 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:55.902 17:15:34 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.159 17:15:34 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.160 17:15:34 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.160 17:15:34 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.160 17:15:34 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.160 
17:15:34 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.160 17:15:34 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.160 17:15:34 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:56.160 00:04:56.160 real 0m0.237s 00:04:56.160 user 0m0.198s 00:04:56.160 sys 0m0.029s 00:04:56.160 17:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.160 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.160 ************************************ 00:04:56.160 END TEST rpc_trace_cmd_test 00:04:56.160 ************************************ 00:04:56.160 17:15:35 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:56.160 17:15:35 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:56.160 17:15:35 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:56.160 17:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.160 17:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.160 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.160 ************************************ 00:04:56.160 START TEST rpc_daemon_integrity 00:04:56.160 ************************************ 00:04:56.160 17:15:35 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:56.160 17:15:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.160 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.160 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.160 17:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.160 17:15:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.160 17:15:35 -- rpc/rpc.sh@13 -- # jq length 00:04:56.160 17:15:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.160 17:15:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.160 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.160 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.160 17:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.160 17:15:35 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:56.160 17:15:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.160 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.160 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.160 17:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.160 17:15:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.160 { 00:04:56.160 "name": "Malloc2", 00:04:56.160 "aliases": [ 00:04:56.160 "10fd8f85-3e96-42bb-83ff-a99f27947d13" 00:04:56.160 ], 00:04:56.160 "product_name": "Malloc disk", 00:04:56.160 "block_size": 512, 00:04:56.160 "num_blocks": 16384, 00:04:56.160 "uuid": "10fd8f85-3e96-42bb-83ff-a99f27947d13", 00:04:56.160 "assigned_rate_limits": { 00:04:56.160 "rw_ios_per_sec": 0, 00:04:56.160 "rw_mbytes_per_sec": 0, 00:04:56.160 "r_mbytes_per_sec": 0, 00:04:56.160 "w_mbytes_per_sec": 0 00:04:56.160 }, 00:04:56.160 "claimed": false, 00:04:56.160 "zoned": false, 00:04:56.160 "supported_io_types": { 00:04:56.160 "read": true, 00:04:56.160 "write": true, 00:04:56.160 "unmap": true, 00:04:56.160 "write_zeroes": true, 00:04:56.160 "flush": true, 00:04:56.160 "reset": true, 00:04:56.160 "compare": false, 00:04:56.160 "compare_and_write": false, 00:04:56.160 "abort": true, 00:04:56.160 "nvme_admin": false, 00:04:56.160 "nvme_io": false 00:04:56.160 }, 00:04:56.160 "memory_domains": [ 00:04:56.160 { 00:04:56.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.160 "dma_device_type": 2 00:04:56.160 } 00:04:56.160 ], 00:04:56.160 "driver_specific": {} 00:04:56.160 } 00:04:56.160 ]' 00:04:56.160 17:15:35 -- rpc/rpc.sh@17 -- # jq length 00:04:56.418 17:15:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.418 17:15:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:56.418 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.418 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.418 [2024-07-12 17:15:35.170616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:04:56.418 [2024-07-12 17:15:35.170652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.418 [2024-07-12 17:15:35.170671] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f35cb0 00:04:56.418 [2024-07-12 17:15:35.170679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.418 [2024-07-12 17:15:35.172021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.418 [2024-07-12 17:15:35.172045] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.418 Passthru0 00:04:56.418 17:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.418 17:15:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.418 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.418 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.418 17:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.418 17:15:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.418 { 00:04:56.418 "name": "Malloc2", 00:04:56.418 "aliases": [ 00:04:56.418 "10fd8f85-3e96-42bb-83ff-a99f27947d13" 00:04:56.418 ], 00:04:56.418 "product_name": "Malloc disk", 00:04:56.418 "block_size": 512, 00:04:56.418 "num_blocks": 16384, 00:04:56.418 "uuid": "10fd8f85-3e96-42bb-83ff-a99f27947d13", 00:04:56.418 "assigned_rate_limits": { 00:04:56.418 "rw_ios_per_sec": 0, 00:04:56.418 "rw_mbytes_per_sec": 0, 00:04:56.418 "r_mbytes_per_sec": 0, 00:04:56.418 "w_mbytes_per_sec": 0 00:04:56.418 }, 00:04:56.418 "claimed": true, 00:04:56.418 "claim_type": "exclusive_write", 00:04:56.418 "zoned": false, 00:04:56.418 "supported_io_types": { 00:04:56.418 "read": true, 00:04:56.418 "write": true, 00:04:56.418 "unmap": true, 00:04:56.418 "write_zeroes": true, 00:04:56.418 "flush": true, 00:04:56.418 "reset": true, 00:04:56.418 "compare": false, 00:04:56.418 "compare_and_write": false, 00:04:56.418 "abort": true, 00:04:56.418 
"nvme_admin": false, 00:04:56.418 "nvme_io": false 00:04:56.418 }, 00:04:56.419 "memory_domains": [ 00:04:56.419 { 00:04:56.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.419 "dma_device_type": 2 00:04:56.419 } 00:04:56.419 ], 00:04:56.419 "driver_specific": {} 00:04:56.419 }, 00:04:56.419 { 00:04:56.419 "name": "Passthru0", 00:04:56.419 "aliases": [ 00:04:56.419 "edcbcd64-4600-583f-a5f1-e06d058ff761" 00:04:56.419 ], 00:04:56.419 "product_name": "passthru", 00:04:56.419 "block_size": 512, 00:04:56.419 "num_blocks": 16384, 00:04:56.419 "uuid": "edcbcd64-4600-583f-a5f1-e06d058ff761", 00:04:56.419 "assigned_rate_limits": { 00:04:56.419 "rw_ios_per_sec": 0, 00:04:56.419 "rw_mbytes_per_sec": 0, 00:04:56.419 "r_mbytes_per_sec": 0, 00:04:56.419 "w_mbytes_per_sec": 0 00:04:56.419 }, 00:04:56.419 "claimed": false, 00:04:56.419 "zoned": false, 00:04:56.419 "supported_io_types": { 00:04:56.419 "read": true, 00:04:56.419 "write": true, 00:04:56.419 "unmap": true, 00:04:56.419 "write_zeroes": true, 00:04:56.419 "flush": true, 00:04:56.419 "reset": true, 00:04:56.419 "compare": false, 00:04:56.419 "compare_and_write": false, 00:04:56.419 "abort": true, 00:04:56.419 "nvme_admin": false, 00:04:56.419 "nvme_io": false 00:04:56.419 }, 00:04:56.419 "memory_domains": [ 00:04:56.419 { 00:04:56.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.419 "dma_device_type": 2 00:04:56.419 } 00:04:56.419 ], 00:04:56.419 "driver_specific": { 00:04:56.419 "passthru": { 00:04:56.419 "name": "Passthru0", 00:04:56.419 "base_bdev_name": "Malloc2" 00:04:56.419 } 00:04:56.419 } 00:04:56.419 } 00:04:56.419 ]' 00:04:56.419 17:15:35 -- rpc/rpc.sh@21 -- # jq length 00:04:56.419 17:15:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.419 17:15:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.419 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.419 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.419 17:15:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.419 17:15:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:56.419 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.419 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.419 17:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.419 17:15:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.419 17:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.419 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.419 17:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.419 17:15:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.419 17:15:35 -- rpc/rpc.sh@26 -- # jq length 00:04:56.419 17:15:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.419 00:04:56.419 real 0m0.281s 00:04:56.419 user 0m0.178s 00:04:56.419 sys 0m0.040s 00:04:56.419 17:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.419 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.419 ************************************ 00:04:56.419 END TEST rpc_daemon_integrity 00:04:56.419 ************************************ 00:04:56.419 17:15:35 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:56.419 17:15:35 -- rpc/rpc.sh@84 -- # killprocess 3914010 00:04:56.419 17:15:35 -- common/autotest_common.sh@926 -- # '[' -z 3914010 ']' 00:04:56.419 17:15:35 -- common/autotest_common.sh@930 -- # kill -0 3914010 00:04:56.419 17:15:35 -- common/autotest_common.sh@931 -- # uname 00:04:56.419 17:15:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:56.419 17:15:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3914010 00:04:56.734 17:15:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:56.734 17:15:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:56.734 17:15:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3914010' 00:04:56.734 killing process 
with pid 3914010 00:04:56.734 17:15:35 -- common/autotest_common.sh@945 -- # kill 3914010 00:04:56.734 17:15:35 -- common/autotest_common.sh@950 -- # wait 3914010 00:04:57.032 00:04:57.032 real 0m2.779s 00:04:57.032 user 0m3.753s 00:04:57.032 sys 0m0.729s 00:04:57.032 17:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.032 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.032 ************************************ 00:04:57.032 END TEST rpc 00:04:57.032 ************************************ 00:04:57.032 17:15:35 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.032 17:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.032 17:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.032 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.032 ************************************ 00:04:57.032 START TEST rpc_client 00:04:57.032 ************************************ 00:04:57.032 17:15:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.032 * Looking for test storage... 
00:04:57.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:57.032 17:15:35 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:57.032 OK 00:04:57.032 17:15:35 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.032 00:04:57.032 real 0m0.109s 00:04:57.032 user 0m0.045s 00:04:57.032 sys 0m0.072s 00:04:57.032 17:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.032 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.032 ************************************ 00:04:57.032 END TEST rpc_client 00:04:57.032 ************************************ 00:04:57.032 17:15:35 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.032 17:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.032 17:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.032 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.032 ************************************ 00:04:57.032 START TEST json_config 00:04:57.032 ************************************ 00:04:57.032 17:15:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.032 17:15:35 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.032 17:15:35 -- nvmf/common.sh@7 -- # uname -s 00:04:57.032 17:15:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.032 17:15:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.032 17:15:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.032 17:15:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.032 17:15:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.032 17:15:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.032 17:15:35 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.032 17:15:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.032 17:15:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.032 17:15:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.032 17:15:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:57.032 17:15:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:57.032 17:15:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.032 17:15:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.032 17:15:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.032 17:15:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.032 17:15:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.032 17:15:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.032 17:15:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.032 17:15:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.032 17:15:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.032 17:15:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.032 17:15:35 -- paths/export.sh@5 -- # export PATH 00:04:57.032 17:15:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.032 17:15:35 -- nvmf/common.sh@46 -- # : 0 00:04:57.032 17:15:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:57.032 17:15:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:57.032 17:15:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:57.032 17:15:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.032 17:15:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.032 17:15:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:57.032 17:15:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:57.032 17:15:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:57.032 
17:15:35 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:57.032 17:15:35 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:57.032 17:15:35 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:57.032 17:15:35 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:57.032 17:15:35 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:57.033 17:15:35 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:57.033 17:15:35 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:57.033 17:15:35 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:57.033 17:15:35 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:57.033 17:15:35 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:57.033 17:15:35 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:57.033 17:15:35 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:57.033 17:15:35 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:57.033 17:15:35 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.033 17:15:35 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:57.033 INFO: JSON configuration test init 00:04:57.033 17:15:35 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:57.033 17:15:35 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:57.033 17:15:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:57.033 17:15:35 -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.291 17:15:35 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:57.291 17:15:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:57.291 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.291 17:15:36 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:57.291 17:15:36 -- json_config/json_config.sh@98 -- # local app=target 00:04:57.291 17:15:36 -- json_config/json_config.sh@99 -- # shift 00:04:57.291 17:15:36 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:57.291 17:15:36 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:57.291 17:15:36 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:57.291 17:15:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:57.291 17:15:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:57.291 17:15:36 -- json_config/json_config.sh@111 -- # app_pid[$app]=3914748 00:04:57.291 17:15:36 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:57.291 Waiting for target to run... 00:04:57.291 17:15:36 -- json_config/json_config.sh@114 -- # waitforlisten 3914748 /var/tmp/spdk_tgt.sock 00:04:57.291 17:15:36 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:57.291 17:15:36 -- common/autotest_common.sh@819 -- # '[' -z 3914748 ']' 00:04:57.291 17:15:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.291 17:15:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.291 17:15:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:57.291 17:15:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.291 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:04:57.291 [2024-07-12 17:15:36.061535] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:57.291 [2024-07-12 17:15:36.061595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914748 ] 00:04:57.291 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.549 [2024-07-12 17:15:36.363027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.549 [2024-07-12 17:15:36.386772] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.549 [2024-07-12 17:15:36.386906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.115 17:15:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.115 17:15:36 -- common/autotest_common.sh@852 -- # return 0 00:04:58.115 17:15:36 -- json_config/json_config.sh@115 -- # echo '' 00:04:58.115 00:04:58.115 17:15:36 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:58.115 17:15:36 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:58.115 17:15:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:58.115 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:04:58.115 17:15:36 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:58.115 17:15:36 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:58.115 17:15:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:58.115 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:04:58.115 17:15:37 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:58.116 17:15:37 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:58.116 17:15:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:01.400 17:15:40 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:01.400 17:15:40 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:01.400 17:15:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:01.400 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:05:01.400 17:15:40 -- json_config/json_config.sh@48 -- # local ret=0 00:05:01.400 17:15:40 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:01.400 17:15:40 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:01.400 17:15:40 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:01.400 17:15:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:01.400 17:15:40 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:01.400 17:15:40 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:01.400 17:15:40 -- json_config/json_config.sh@51 -- # local get_types 00:05:01.400 17:15:40 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:01.400 17:15:40 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:01.400 17:15:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:01.400 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:05:01.400 17:15:40 -- json_config/json_config.sh@58 -- # return 0 00:05:01.400 17:15:40 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:01.400 17:15:40 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:01.400 17:15:40 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:05:01.400 17:15:40 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:01.400 17:15:40 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:01.400 17:15:40 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:01.400 17:15:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:01.400 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:05:01.401 17:15:40 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:01.401 17:15:40 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:01.401 17:15:40 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:01.401 17:15:40 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.401 17:15:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.659 MallocForNvmf0 00:05:01.659 17:15:40 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.659 17:15:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.917 MallocForNvmf1 00:05:01.917 17:15:40 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.917 17:15:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:02.175 [2024-07-12 17:15:40.998128] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.175 17:15:41 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:02.175 17:15:41 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:02.433 17:15:41 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:02.433 17:15:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:02.691 17:15:41 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.691 17:15:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.948 17:15:41 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.949 17:15:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:03.207 [2024-07-12 17:15:41.961269] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:03.207 17:15:41 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:03.207 17:15:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:03.207 17:15:41 -- common/autotest_common.sh@10 -- # set +x 00:05:03.207 17:15:42 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:03.207 17:15:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:03.207 17:15:42 -- common/autotest_common.sh@10 -- # set +x 00:05:03.207 17:15:42 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:03.207 17:15:42 -- json_config/json_config.sh@353 -- 
# tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:03.207 17:15:42 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:03.465 MallocBdevForConfigChangeCheck 00:05:03.465 17:15:42 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:03.465 17:15:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:03.465 17:15:42 -- common/autotest_common.sh@10 -- # set +x 00:05:03.465 17:15:42 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:03.465 17:15:42 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.724 17:15:42 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:03.724 INFO: shutting down applications... 00:05:03.724 17:15:42 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:03.724 17:15:42 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:03.724 17:15:42 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:03.724 17:15:42 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:05.626 Calling clear_iscsi_subsystem 00:05:05.626 Calling clear_nvmf_subsystem 00:05:05.626 Calling clear_nbd_subsystem 00:05:05.626 Calling clear_ublk_subsystem 00:05:05.626 Calling clear_vhost_blk_subsystem 00:05:05.626 Calling clear_vhost_scsi_subsystem 00:05:05.626 Calling clear_scheduler_subsystem 00:05:05.626 Calling clear_bdev_subsystem 00:05:05.626 Calling clear_accel_subsystem 00:05:05.626 Calling clear_vmd_subsystem 00:05:05.626 Calling clear_sock_subsystem 00:05:05.626 Calling clear_iobuf_subsystem 00:05:05.626 17:15:44 -- json_config/json_config.sh@390 -- # local 
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:05.626 17:15:44 -- json_config/json_config.sh@396 -- # count=100 00:05:05.626 17:15:44 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:05.626 17:15:44 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.626 17:15:44 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:05.626 17:15:44 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:05.884 17:15:44 -- json_config/json_config.sh@398 -- # break 00:05:05.884 17:15:44 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:05.884 17:15:44 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:05.884 17:15:44 -- json_config/json_config.sh@120 -- # local app=target 00:05:05.884 17:15:44 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:05.885 17:15:44 -- json_config/json_config.sh@124 -- # [[ -n 3914748 ]] 00:05:05.885 17:15:44 -- json_config/json_config.sh@127 -- # kill -SIGINT 3914748 00:05:05.885 17:15:44 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:05.885 17:15:44 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:05.885 17:15:44 -- json_config/json_config.sh@130 -- # kill -0 3914748 00:05:05.885 17:15:44 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:06.452 17:15:45 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:06.452 17:15:45 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:06.452 17:15:45 -- json_config/json_config.sh@130 -- # kill -0 3914748 00:05:06.452 17:15:45 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:06.452 17:15:45 -- json_config/json_config.sh@132 -- # break 00:05:06.452 17:15:45 -- 
json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:06.452 17:15:45 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:06.452 SPDK target shutdown done 00:05:06.452 17:15:45 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:06.452 INFO: relaunching applications... 00:05:06.452 17:15:45 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.452 17:15:45 -- json_config/json_config.sh@98 -- # local app=target 00:05:06.452 17:15:45 -- json_config/json_config.sh@99 -- # shift 00:05:06.452 17:15:45 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:06.452 17:15:45 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:06.452 17:15:45 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:06.452 17:15:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:06.452 17:15:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:06.452 17:15:45 -- json_config/json_config.sh@111 -- # app_pid[$app]=3916726 00:05:06.452 17:15:45 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:06.452 Waiting for target to run... 
00:05:06.452 17:15:45 -- json_config/json_config.sh@114 -- # waitforlisten 3916726 /var/tmp/spdk_tgt.sock 00:05:06.452 17:15:45 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.452 17:15:45 -- common/autotest_common.sh@819 -- # '[' -z 3916726 ']' 00:05:06.452 17:15:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.452 17:15:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.452 17:15:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.452 17:15:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.452 17:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:06.452 [2024-07-12 17:15:45.401261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:06.452 [2024-07-12 17:15:45.401327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916726 ] 00:05:06.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.969 [2024-07-12 17:15:45.698166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.969 [2024-07-12 17:15:45.722103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.969 [2024-07-12 17:15:45.722238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.256 [2024-07-12 17:15:48.734730] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.256 [2024-07-12 17:15:48.767100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.514 17:15:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.514 17:15:49 -- common/autotest_common.sh@852 -- # return 0 00:05:10.514 17:15:49 -- json_config/json_config.sh@115 -- # echo '' 00:05:10.514 00:05:10.514 17:15:49 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:10.514 17:15:49 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:10.514 INFO: Checking if target configuration is the same... 
00:05:10.514 17:15:49 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.514 17:15:49 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:10.514 17:15:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.514 + '[' 2 -ne 2 ']' 00:05:10.514 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:10.514 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:10.514 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:10.514 +++ basename /dev/fd/62 00:05:10.514 ++ mktemp /tmp/62.XXX 00:05:10.514 + tmp_file_1=/tmp/62.BEU 00:05:10.514 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.514 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.514 + tmp_file_2=/tmp/spdk_tgt_config.json.MxN 00:05:10.514 + ret=0 00:05:10.514 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.772 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.772 + diff -u /tmp/62.BEU /tmp/spdk_tgt_config.json.MxN 00:05:10.772 + echo 'INFO: JSON config files are the same' 00:05:10.772 INFO: JSON config files are the same 00:05:10.772 + rm /tmp/62.BEU /tmp/spdk_tgt_config.json.MxN 00:05:10.772 + exit 0 00:05:10.772 17:15:49 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:10.772 17:15:49 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:10.772 INFO: changing configuration and checking if this can be detected... 
00:05:10.772 17:15:49 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.772 17:15:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.029 17:15:49 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.029 17:15:49 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:11.029 17:15:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.029 + '[' 2 -ne 2 ']' 00:05:11.029 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.029 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:11.029 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.029 +++ basename /dev/fd/62 00:05:11.029 ++ mktemp /tmp/62.XXX 00:05:11.029 + tmp_file_1=/tmp/62.qJf 00:05:11.029 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.029 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.029 + tmp_file_2=/tmp/spdk_tgt_config.json.ZwO 00:05:11.029 + ret=0 00:05:11.029 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.594 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.594 + diff -u /tmp/62.qJf /tmp/spdk_tgt_config.json.ZwO 00:05:11.594 + ret=1 00:05:11.594 + echo '=== Start of file: /tmp/62.qJf ===' 00:05:11.594 + cat /tmp/62.qJf 00:05:11.594 + echo '=== End of file: /tmp/62.qJf ===' 00:05:11.594 + echo '' 00:05:11.594 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZwO ===' 00:05:11.594 + cat /tmp/spdk_tgt_config.json.ZwO 00:05:11.594 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZwO ===' 00:05:11.594 + echo '' 00:05:11.594 + rm /tmp/62.qJf /tmp/spdk_tgt_config.json.ZwO 00:05:11.594 + exit 1 00:05:11.594 17:15:50 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:11.594 INFO: configuration change detected. 
00:05:11.594 17:15:50 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:11.594 17:15:50 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:11.594 17:15:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:11.595 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:11.595 17:15:50 -- json_config/json_config.sh@360 -- # local ret=0 00:05:11.595 17:15:50 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:11.595 17:15:50 -- json_config/json_config.sh@370 -- # [[ -n 3916726 ]] 00:05:11.595 17:15:50 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:11.595 17:15:50 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:11.595 17:15:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:11.595 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:11.595 17:15:50 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:11.595 17:15:50 -- json_config/json_config.sh@246 -- # uname -s 00:05:11.595 17:15:50 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:11.595 17:15:50 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:11.595 17:15:50 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:11.595 17:15:50 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:11.595 17:15:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:11.595 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:11.595 17:15:50 -- json_config/json_config.sh@376 -- # killprocess 3916726 00:05:11.595 17:15:50 -- common/autotest_common.sh@926 -- # '[' -z 3916726 ']' 00:05:11.595 17:15:50 -- common/autotest_common.sh@930 -- # kill -0 3916726 00:05:11.595 17:15:50 -- common/autotest_common.sh@931 -- # uname 00:05:11.595 17:15:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.595 17:15:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3916726 00:05:11.595 
17:15:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.595 17:15:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.595 17:15:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3916726' 00:05:11.595 killing process with pid 3916726 00:05:11.595 17:15:50 -- common/autotest_common.sh@945 -- # kill 3916726 00:05:11.595 17:15:50 -- common/autotest_common.sh@950 -- # wait 3916726 00:05:13.492 17:15:52 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.492 17:15:52 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:13.492 17:15:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:13.492 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.492 17:15:52 -- json_config/json_config.sh@381 -- # return 0 00:05:13.492 17:15:52 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:13.492 INFO: Success 00:05:13.492 00:05:13.492 real 0m16.159s 00:05:13.492 user 0m18.693s 00:05:13.492 sys 0m2.025s 00:05:13.492 17:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.492 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.492 ************************************ 00:05:13.492 END TEST json_config 00:05:13.492 ************************************ 00:05:13.492 17:15:52 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.492 17:15:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.492 17:15:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.492 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.492 ************************************ 00:05:13.492 START TEST json_config_extra_key 00:05:13.492 ************************************ 00:05:13.492 
17:15:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.492 17:15:52 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.492 17:15:52 -- nvmf/common.sh@7 -- # uname -s 00:05:13.492 17:15:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.492 17:15:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.492 17:15:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.492 17:15:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.492 17:15:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.492 17:15:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.492 17:15:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.492 17:15:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.492 17:15:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.492 17:15:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.492 17:15:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:13.492 17:15:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:13.492 17:15:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.492 17:15:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.492 17:15:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.492 17:15:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.492 17:15:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.492 17:15:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.492 17:15:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.492 17:15:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.492 17:15:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.493 17:15:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.493 17:15:52 -- paths/export.sh@5 -- # export PATH 00:05:13.493 17:15:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.493 17:15:52 -- nvmf/common.sh@46 -- # : 0 00:05:13.493 17:15:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:13.493 17:15:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:13.493 
17:15:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:13.493 17:15:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.493 17:15:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.493 17:15:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:13.493 17:15:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:13.493 17:15:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:13.493 INFO: launching applications... 
00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3917966 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:13.493 Waiting for target to run... 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3917966 /var/tmp/spdk_tgt.sock 00:05:13.493 17:15:52 -- common/autotest_common.sh@819 -- # '[' -z 3917966 ']' 00:05:13.493 17:15:52 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.493 17:15:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.493 17:15:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.493 17:15:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.493 17:15:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.493 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.493 [2024-07-12 17:15:52.253502] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:13.493 [2024-07-12 17:15:52.253570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917966 ] 00:05:13.493 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.750 [2024-07-12 17:15:52.711162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.011 [2024-07-12 17:15:52.745056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:14.011 [2024-07-12 17:15:52.745199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.271 17:15:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.271 17:15:53 -- common/autotest_common.sh@852 -- # return 0 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:14.271 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:14.271 INFO: shutting down applications... 
00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3917966 ]] 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3917966 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3917966 00:05:14.271 17:15:53 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:14.838 17:15:53 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:14.838 17:15:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:14.838 17:15:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3917966 00:05:14.838 17:15:53 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:14.838 17:15:53 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:14.838 17:15:53 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:14.838 17:15:53 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:14.838 SPDK target shutdown done 00:05:14.839 17:15:53 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:14.839 Success 00:05:14.839 00:05:14.839 real 0m1.503s 00:05:14.839 user 0m1.170s 00:05:14.839 sys 0m0.542s 00:05:14.839 17:15:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.839 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 ************************************ 00:05:14.839 END TEST json_config_extra_key 00:05:14.839 ************************************ 00:05:14.839 17:15:53 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.839 17:15:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.839 17:15:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.839 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 ************************************ 00:05:14.839 START TEST alias_rpc 00:05:14.839 ************************************ 00:05:14.839 17:15:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.839 * Looking for test storage... 00:05:14.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:14.839 17:15:53 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:14.839 17:15:53 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3918329 00:05:14.839 17:15:53 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3918329 00:05:14.839 17:15:53 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.839 17:15:53 -- common/autotest_common.sh@819 -- # '[' -z 3918329 ']' 00:05:14.839 17:15:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.839 17:15:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:14.839 17:15:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.839 17:15:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:14.839 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 [2024-07-12 17:15:53.797214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:14.839 [2024-07-12 17:15:53.797291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918329 ] 00:05:15.097 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.097 [2024-07-12 17:15:53.880271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.097 [2024-07-12 17:15:53.921876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.097 [2024-07-12 17:15:53.922035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.029 17:15:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.029 17:15:54 -- common/autotest_common.sh@852 -- # return 0 00:05:16.029 17:15:54 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.029 17:15:54 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3918329 00:05:16.029 17:15:54 -- common/autotest_common.sh@926 -- # '[' -z 3918329 ']' 00:05:16.029 17:15:54 -- common/autotest_common.sh@930 -- # kill -0 3918329 00:05:16.029 17:15:54 -- common/autotest_common.sh@931 -- # uname 00:05:16.029 17:15:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:16.029 17:15:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3918329 00:05:16.287 17:15:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:16.287 17:15:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:16.287 17:15:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3918329' 00:05:16.287 killing process with pid 3918329 00:05:16.287 17:15:55 -- common/autotest_common.sh@945 -- # kill 3918329 00:05:16.287 17:15:55 -- common/autotest_common.sh@950 -- # wait 3918329 00:05:16.545 00:05:16.545 real 0m1.693s 00:05:16.545 user 0m1.987s 00:05:16.545 sys 0m0.440s 
00:05:16.545 17:15:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.545 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.545 ************************************ 00:05:16.545 END TEST alias_rpc 00:05:16.545 ************************************ 00:05:16.545 17:15:55 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:16.545 17:15:55 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.545 17:15:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.545 17:15:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.545 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.545 ************************************ 00:05:16.545 START TEST spdkcli_tcp 00:05:16.545 ************************************ 00:05:16.545 17:15:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.545 * Looking for test storage... 
00:05:16.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:16.545 17:15:55 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:16.545 17:15:55 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:16.545 17:15:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:16.545 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3918811 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:16.545 17:15:55 -- spdkcli/tcp.sh@27 -- # waitforlisten 3918811 00:05:16.545 17:15:55 -- common/autotest_common.sh@819 -- # '[' -z 3918811 ']' 00:05:16.545 17:15:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.545 17:15:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.545 17:15:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.545 17:15:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.545 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.804 [2024-07-12 17:15:55.535891] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:16.804 [2024-07-12 17:15:55.535964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918811 ] 00:05:16.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.804 [2024-07-12 17:15:55.624954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.804 [2024-07-12 17:15:55.667964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.804 [2024-07-12 17:15:55.668157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.804 [2024-07-12 17:15:55.668162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.741 17:15:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.741 17:15:56 -- common/autotest_common.sh@852 -- # return 0 00:05:17.741 17:15:56 -- spdkcli/tcp.sh@31 -- # socat_pid=3918836 00:05:17.741 17:15:56 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:17.741 17:15:56 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:17.741 [ 00:05:17.741 "bdev_malloc_delete", 00:05:17.741 "bdev_malloc_create", 00:05:17.741 "bdev_null_resize", 00:05:17.741 "bdev_null_delete", 00:05:17.741 "bdev_null_create", 00:05:17.741 "bdev_nvme_cuse_unregister", 00:05:17.741 "bdev_nvme_cuse_register", 00:05:17.741 "bdev_opal_new_user", 00:05:17.741 "bdev_opal_set_lock_state", 00:05:17.741 "bdev_opal_delete", 00:05:17.741 "bdev_opal_get_info", 00:05:17.741 "bdev_opal_create", 00:05:17.741 
"bdev_nvme_opal_revert", 00:05:17.741 "bdev_nvme_opal_init", 00:05:17.741 "bdev_nvme_send_cmd", 00:05:17.741 "bdev_nvme_get_path_iostat", 00:05:17.741 "bdev_nvme_get_mdns_discovery_info", 00:05:17.741 "bdev_nvme_stop_mdns_discovery", 00:05:17.741 "bdev_nvme_start_mdns_discovery", 00:05:17.741 "bdev_nvme_set_multipath_policy", 00:05:17.741 "bdev_nvme_set_preferred_path", 00:05:17.741 "bdev_nvme_get_io_paths", 00:05:17.741 "bdev_nvme_remove_error_injection", 00:05:17.741 "bdev_nvme_add_error_injection", 00:05:17.741 "bdev_nvme_get_discovery_info", 00:05:17.741 "bdev_nvme_stop_discovery", 00:05:17.741 "bdev_nvme_start_discovery", 00:05:17.741 "bdev_nvme_get_controller_health_info", 00:05:17.741 "bdev_nvme_disable_controller", 00:05:17.741 "bdev_nvme_enable_controller", 00:05:17.741 "bdev_nvme_reset_controller", 00:05:17.741 "bdev_nvme_get_transport_statistics", 00:05:17.741 "bdev_nvme_apply_firmware", 00:05:17.741 "bdev_nvme_detach_controller", 00:05:17.741 "bdev_nvme_get_controllers", 00:05:17.741 "bdev_nvme_attach_controller", 00:05:17.741 "bdev_nvme_set_hotplug", 00:05:17.741 "bdev_nvme_set_options", 00:05:17.741 "bdev_passthru_delete", 00:05:17.741 "bdev_passthru_create", 00:05:17.741 "bdev_lvol_grow_lvstore", 00:05:17.741 "bdev_lvol_get_lvols", 00:05:17.741 "bdev_lvol_get_lvstores", 00:05:17.741 "bdev_lvol_delete", 00:05:17.741 "bdev_lvol_set_read_only", 00:05:17.741 "bdev_lvol_resize", 00:05:17.741 "bdev_lvol_decouple_parent", 00:05:17.741 "bdev_lvol_inflate", 00:05:17.741 "bdev_lvol_rename", 00:05:17.741 "bdev_lvol_clone_bdev", 00:05:17.741 "bdev_lvol_clone", 00:05:17.741 "bdev_lvol_snapshot", 00:05:17.741 "bdev_lvol_create", 00:05:17.741 "bdev_lvol_delete_lvstore", 00:05:17.741 "bdev_lvol_rename_lvstore", 00:05:17.741 "bdev_lvol_create_lvstore", 00:05:17.741 "bdev_raid_set_options", 00:05:17.741 "bdev_raid_remove_base_bdev", 00:05:17.741 "bdev_raid_add_base_bdev", 00:05:17.741 "bdev_raid_delete", 00:05:17.741 "bdev_raid_create", 00:05:17.741 
"bdev_raid_get_bdevs", 00:05:17.741 "bdev_error_inject_error", 00:05:17.741 "bdev_error_delete", 00:05:17.741 "bdev_error_create", 00:05:17.741 "bdev_split_delete", 00:05:17.741 "bdev_split_create", 00:05:17.741 "bdev_delay_delete", 00:05:17.741 "bdev_delay_create", 00:05:17.741 "bdev_delay_update_latency", 00:05:17.741 "bdev_zone_block_delete", 00:05:17.741 "bdev_zone_block_create", 00:05:17.741 "blobfs_create", 00:05:17.741 "blobfs_detect", 00:05:17.741 "blobfs_set_cache_size", 00:05:17.741 "bdev_aio_delete", 00:05:17.741 "bdev_aio_rescan", 00:05:17.741 "bdev_aio_create", 00:05:17.741 "bdev_ftl_set_property", 00:05:17.741 "bdev_ftl_get_properties", 00:05:17.741 "bdev_ftl_get_stats", 00:05:17.741 "bdev_ftl_unmap", 00:05:17.741 "bdev_ftl_unload", 00:05:17.741 "bdev_ftl_delete", 00:05:17.741 "bdev_ftl_load", 00:05:17.741 "bdev_ftl_create", 00:05:17.741 "bdev_virtio_attach_controller", 00:05:17.741 "bdev_virtio_scsi_get_devices", 00:05:17.741 "bdev_virtio_detach_controller", 00:05:17.741 "bdev_virtio_blk_set_hotplug", 00:05:17.741 "bdev_iscsi_delete", 00:05:17.741 "bdev_iscsi_create", 00:05:17.741 "bdev_iscsi_set_options", 00:05:17.742 "accel_error_inject_error", 00:05:17.742 "ioat_scan_accel_module", 00:05:17.742 "dsa_scan_accel_module", 00:05:17.742 "iaa_scan_accel_module", 00:05:17.742 "vfu_virtio_create_scsi_endpoint", 00:05:17.742 "vfu_virtio_scsi_remove_target", 00:05:17.742 "vfu_virtio_scsi_add_target", 00:05:17.742 "vfu_virtio_create_blk_endpoint", 00:05:17.742 "vfu_virtio_delete_endpoint", 00:05:17.742 "iscsi_set_options", 00:05:17.742 "iscsi_get_auth_groups", 00:05:17.742 "iscsi_auth_group_remove_secret", 00:05:17.742 "iscsi_auth_group_add_secret", 00:05:17.742 "iscsi_delete_auth_group", 00:05:17.742 "iscsi_create_auth_group", 00:05:17.742 "iscsi_set_discovery_auth", 00:05:17.742 "iscsi_get_options", 00:05:17.742 "iscsi_target_node_request_logout", 00:05:17.742 "iscsi_target_node_set_redirect", 00:05:17.742 "iscsi_target_node_set_auth", 00:05:17.742 
"iscsi_target_node_add_lun", 00:05:17.742 "iscsi_get_connections", 00:05:17.742 "iscsi_portal_group_set_auth", 00:05:17.742 "iscsi_start_portal_group", 00:05:17.742 "iscsi_delete_portal_group", 00:05:17.742 "iscsi_create_portal_group", 00:05:17.742 "iscsi_get_portal_groups", 00:05:17.742 "iscsi_delete_target_node", 00:05:17.742 "iscsi_target_node_remove_pg_ig_maps", 00:05:17.742 "iscsi_target_node_add_pg_ig_maps", 00:05:17.742 "iscsi_create_target_node", 00:05:17.742 "iscsi_get_target_nodes", 00:05:17.742 "iscsi_delete_initiator_group", 00:05:17.742 "iscsi_initiator_group_remove_initiators", 00:05:17.742 "iscsi_initiator_group_add_initiators", 00:05:17.742 "iscsi_create_initiator_group", 00:05:17.742 "iscsi_get_initiator_groups", 00:05:17.742 "nvmf_set_crdt", 00:05:17.742 "nvmf_set_config", 00:05:17.742 "nvmf_set_max_subsystems", 00:05:17.742 "nvmf_subsystem_get_listeners", 00:05:17.742 "nvmf_subsystem_get_qpairs", 00:05:17.742 "nvmf_subsystem_get_controllers", 00:05:17.742 "nvmf_get_stats", 00:05:17.742 "nvmf_get_transports", 00:05:17.742 "nvmf_create_transport", 00:05:17.742 "nvmf_get_targets", 00:05:17.742 "nvmf_delete_target", 00:05:17.742 "nvmf_create_target", 00:05:17.742 "nvmf_subsystem_allow_any_host", 00:05:17.742 "nvmf_subsystem_remove_host", 00:05:17.742 "nvmf_subsystem_add_host", 00:05:17.742 "nvmf_subsystem_remove_ns", 00:05:17.742 "nvmf_subsystem_add_ns", 00:05:17.742 "nvmf_subsystem_listener_set_ana_state", 00:05:17.742 "nvmf_discovery_get_referrals", 00:05:17.742 "nvmf_discovery_remove_referral", 00:05:17.742 "nvmf_discovery_add_referral", 00:05:17.742 "nvmf_subsystem_remove_listener", 00:05:17.742 "nvmf_subsystem_add_listener", 00:05:17.742 "nvmf_delete_subsystem", 00:05:17.742 "nvmf_create_subsystem", 00:05:17.742 "nvmf_get_subsystems", 00:05:17.742 "env_dpdk_get_mem_stats", 00:05:17.742 "nbd_get_disks", 00:05:17.742 "nbd_stop_disk", 00:05:17.742 "nbd_start_disk", 00:05:17.742 "ublk_recover_disk", 00:05:17.742 "ublk_get_disks", 00:05:17.742 
"ublk_stop_disk", 00:05:17.742 "ublk_start_disk", 00:05:17.742 "ublk_destroy_target", 00:05:17.742 "ublk_create_target", 00:05:17.742 "virtio_blk_create_transport", 00:05:17.742 "virtio_blk_get_transports", 00:05:17.742 "vhost_controller_set_coalescing", 00:05:17.742 "vhost_get_controllers", 00:05:17.742 "vhost_delete_controller", 00:05:17.742 "vhost_create_blk_controller", 00:05:17.742 "vhost_scsi_controller_remove_target", 00:05:17.742 "vhost_scsi_controller_add_target", 00:05:17.742 "vhost_start_scsi_controller", 00:05:17.742 "vhost_create_scsi_controller", 00:05:17.742 "thread_set_cpumask", 00:05:17.742 "framework_get_scheduler", 00:05:17.742 "framework_set_scheduler", 00:05:17.742 "framework_get_reactors", 00:05:17.742 "thread_get_io_channels", 00:05:17.742 "thread_get_pollers", 00:05:17.742 "thread_get_stats", 00:05:17.742 "framework_monitor_context_switch", 00:05:17.742 "spdk_kill_instance", 00:05:17.742 "log_enable_timestamps", 00:05:17.742 "log_get_flags", 00:05:17.742 "log_clear_flag", 00:05:17.742 "log_set_flag", 00:05:17.742 "log_get_level", 00:05:17.742 "log_set_level", 00:05:17.742 "log_get_print_level", 00:05:17.742 "log_set_print_level", 00:05:17.742 "framework_enable_cpumask_locks", 00:05:17.742 "framework_disable_cpumask_locks", 00:05:17.742 "framework_wait_init", 00:05:17.742 "framework_start_init", 00:05:17.742 "scsi_get_devices", 00:05:17.742 "bdev_get_histogram", 00:05:17.742 "bdev_enable_histogram", 00:05:17.742 "bdev_set_qos_limit", 00:05:17.742 "bdev_set_qd_sampling_period", 00:05:17.742 "bdev_get_bdevs", 00:05:17.742 "bdev_reset_iostat", 00:05:17.742 "bdev_get_iostat", 00:05:17.742 "bdev_examine", 00:05:17.742 "bdev_wait_for_examine", 00:05:17.742 "bdev_set_options", 00:05:17.742 "notify_get_notifications", 00:05:17.742 "notify_get_types", 00:05:17.742 "accel_get_stats", 00:05:17.742 "accel_set_options", 00:05:17.742 "accel_set_driver", 00:05:17.742 "accel_crypto_key_destroy", 00:05:17.742 "accel_crypto_keys_get", 00:05:17.742 
"accel_crypto_key_create", 00:05:17.742 "accel_assign_opc", 00:05:17.742 "accel_get_module_info", 00:05:17.742 "accel_get_opc_assignments", 00:05:17.742 "vmd_rescan", 00:05:17.742 "vmd_remove_device", 00:05:17.742 "vmd_enable", 00:05:17.742 "sock_set_default_impl", 00:05:17.742 "sock_impl_set_options", 00:05:17.742 "sock_impl_get_options", 00:05:17.742 "iobuf_get_stats", 00:05:17.742 "iobuf_set_options", 00:05:17.742 "framework_get_pci_devices", 00:05:17.742 "framework_get_config", 00:05:17.742 "framework_get_subsystems", 00:05:17.742 "vfu_tgt_set_base_path", 00:05:17.742 "trace_get_info", 00:05:17.742 "trace_get_tpoint_group_mask", 00:05:17.742 "trace_disable_tpoint_group", 00:05:17.742 "trace_enable_tpoint_group", 00:05:17.742 "trace_clear_tpoint_mask", 00:05:17.742 "trace_set_tpoint_mask", 00:05:17.742 "spdk_get_version", 00:05:17.742 "rpc_get_methods" 00:05:17.742 ] 00:05:18.004 17:15:56 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.004 17:15:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.004 17:15:56 -- common/autotest_common.sh@10 -- # set +x 00:05:18.004 17:15:56 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.004 17:15:56 -- spdkcli/tcp.sh@38 -- # killprocess 3918811 00:05:18.004 17:15:56 -- common/autotest_common.sh@926 -- # '[' -z 3918811 ']' 00:05:18.004 17:15:56 -- common/autotest_common.sh@930 -- # kill -0 3918811 00:05:18.004 17:15:56 -- common/autotest_common.sh@931 -- # uname 00:05:18.004 17:15:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.004 17:15:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3918811 00:05:18.004 17:15:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.004 17:15:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.004 17:15:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3918811' 00:05:18.004 killing process with pid 3918811 00:05:18.004 17:15:56 -- 
common/autotest_common.sh@945 -- # kill 3918811 00:05:18.004 17:15:56 -- common/autotest_common.sh@950 -- # wait 3918811 00:05:18.262 00:05:18.262 real 0m1.729s 00:05:18.262 user 0m3.409s 00:05:18.262 sys 0m0.481s 00:05:18.262 17:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.262 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:05:18.262 ************************************ 00:05:18.262 END TEST spdkcli_tcp 00:05:18.262 ************************************ 00:05:18.262 17:15:57 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.262 17:15:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.262 17:15:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.262 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:05:18.262 ************************************ 00:05:18.262 START TEST dpdk_mem_utility 00:05:18.262 ************************************ 00:05:18.262 17:15:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.521 * Looking for test storage... 
00:05:18.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:18.521 17:15:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:18.521 17:15:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3919153 00:05:18.521 17:15:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3919153 00:05:18.521 17:15:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.521 17:15:57 -- common/autotest_common.sh@819 -- # '[' -z 3919153 ']' 00:05:18.521 17:15:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.521 17:15:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.521 17:15:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.521 17:15:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.521 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:05:18.521 [2024-07-12 17:15:57.291176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:18.521 [2024-07-12 17:15:57.291241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919153 ] 00:05:18.521 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.521 [2024-07-12 17:15:57.374486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.521 [2024-07-12 17:15:57.416171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:18.521 [2024-07-12 17:15:57.416353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.457 17:15:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.457 17:15:58 -- common/autotest_common.sh@852 -- # return 0 00:05:19.457 17:15:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:19.457 17:15:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:19.457 17:15:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.457 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:19.457 { 00:05:19.457 "filename": "/tmp/spdk_mem_dump.txt" 00:05:19.457 } 00:05:19.457 17:15:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.458 17:15:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.458 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:19.458 1 heaps totaling size 814.000000 MiB 00:05:19.458 size: 814.000000 MiB heap id: 0 00:05:19.458 end heaps---------- 00:05:19.458 8 mempools totaling size 598.116089 MiB 00:05:19.458 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:19.458 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:19.458 size: 84.521057 MiB name: bdev_io_3919153 00:05:19.458 size: 51.011292 MiB name: evtpool_3919153 00:05:19.458 size: 
50.003479 MiB name: msgpool_3919153 00:05:19.458 size: 21.763794 MiB name: PDU_Pool 00:05:19.458 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:19.458 size: 0.026123 MiB name: Session_Pool 00:05:19.458 end mempools------- 00:05:19.458 6 memzones totaling size 4.142822 MiB 00:05:19.458 size: 1.000366 MiB name: RG_ring_0_3919153 00:05:19.458 size: 1.000366 MiB name: RG_ring_1_3919153 00:05:19.458 size: 1.000366 MiB name: RG_ring_4_3919153 00:05:19.458 size: 1.000366 MiB name: RG_ring_5_3919153 00:05:19.458 size: 0.125366 MiB name: RG_ring_2_3919153 00:05:19.458 size: 0.015991 MiB name: RG_ring_3_3919153 00:05:19.458 end memzones------- 00:05:19.458 17:15:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:19.717 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:19.717 list of free elements. size: 12.519348 MiB 00:05:19.717 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:19.717 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:19.717 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:19.717 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:19.717 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:19.717 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:19.717 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:19.717 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:19.717 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:19.717 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:19.717 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:19.717 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:19.717 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:19.717 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:19.717 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:05:19.717 list of standard malloc elements. size: 199.218079 MiB 00:05:19.717 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:19.717 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:19.717 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:19.717 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:19.717 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:19.717 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:19.717 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:19.717 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:19.717 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:19.717 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:19.717 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:19.717 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:05:19.717 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:19.717 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:19.717 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:19.717 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:19.717 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:19.717 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:19.717 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:19.717 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:19.718 list of memzone associated elements. 
size: 602.262573 MiB 00:05:19.718 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:19.718 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:19.718 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:19.718 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:19.718 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:19.718 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3919153_0 00:05:19.718 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:19.718 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3919153_0 00:05:19.718 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:19.718 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3919153_0 00:05:19.718 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:19.718 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:19.718 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:19.718 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:19.718 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:19.718 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3919153 00:05:19.718 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:19.718 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3919153 00:05:19.718 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:19.718 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3919153 00:05:19.718 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:19.718 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:19.718 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:19.718 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:19.718 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:19.718 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:19.718 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:19.718 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:19.718 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:19.718 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3919153 00:05:19.718 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:19.718 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3919153 00:05:19.718 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:19.718 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3919153 00:05:19.718 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:19.718 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3919153 00:05:19.718 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:19.718 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3919153 00:05:19.718 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:19.718 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:19.718 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:19.718 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:19.718 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:19.718 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:19.718 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:19.718 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3919153 00:05:19.718 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:19.718 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:19.718 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:19.718 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:19.718 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:19.718 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3919153 00:05:19.718 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:19.718 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:19.718 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:19.718 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3919153 00:05:19.718 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:19.718 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3919153 00:05:19.718 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:19.718 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:19.718 17:15:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:19.718 17:15:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3919153 00:05:19.718 17:15:58 -- common/autotest_common.sh@926 -- # '[' -z 3919153 ']' 00:05:19.718 17:15:58 -- common/autotest_common.sh@930 -- # kill -0 3919153 00:05:19.718 17:15:58 -- common/autotest_common.sh@931 -- # uname 00:05:19.718 17:15:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:19.718 17:15:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3919153 00:05:19.718 17:15:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:19.718 17:15:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:19.718 17:15:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3919153' 00:05:19.718 killing process with pid 3919153 00:05:19.718 17:15:58 -- common/autotest_common.sh@945 -- # kill 3919153 00:05:19.718 17:15:58 -- common/autotest_common.sh@950 -- # wait 3919153 00:05:19.978 00:05:19.978 real 0m1.678s 00:05:19.978 user 0m1.981s 00:05:19.978 sys 0m0.444s 00:05:19.978 17:15:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.978 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:19.978 
************************************ 00:05:19.978 END TEST dpdk_mem_utility 00:05:19.978 ************************************ 00:05:19.978 17:15:58 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.978 17:15:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.978 17:15:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.978 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:19.978 ************************************ 00:05:19.978 START TEST event 00:05:19.978 ************************************ 00:05:19.978 17:15:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.237 * Looking for test storage... 00:05:20.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.237 17:15:58 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.237 17:15:58 -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.237 17:15:58 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.237 17:15:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:20.237 17:15:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.237 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:20.237 ************************************ 00:05:20.237 START TEST event_perf 00:05:20.237 ************************************ 00:05:20.237 17:15:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.237 Running I/O for 1 seconds...[2024-07-12 17:15:58.987037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:20.237 [2024-07-12 17:15:58.987116] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919474 ] 00:05:20.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.237 [2024-07-12 17:15:59.070492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.237 [2024-07-12 17:15:59.115718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.237 [2024-07-12 17:15:59.115833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.237 [2024-07-12 17:15:59.115952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.237 [2024-07-12 17:15:59.115953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.613 Running I/O for 1 seconds... 00:05:21.613 lcore 0: 100398 00:05:21.613 lcore 1: 100400 00:05:21.613 lcore 2: 100402 00:05:21.613 lcore 3: 100401 00:05:21.613 done. 
00:05:21.613 00:05:21.613 real 0m1.217s 00:05:21.613 user 0m4.116s 00:05:21.613 sys 0m0.090s 00:05:21.613 17:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.613 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:21.613 ************************************ 00:05:21.613 END TEST event_perf 00:05:21.613 ************************************ 00:05:21.613 17:16:00 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.613 17:16:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:21.613 17:16:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.613 17:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:21.613 ************************************ 00:05:21.613 START TEST event_reactor 00:05:21.613 ************************************ 00:05:21.613 17:16:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.613 [2024-07-12 17:16:00.242300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:21.613 [2024-07-12 17:16:00.242388] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919763 ] 00:05:21.613 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.613 [2024-07-12 17:16:00.325784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.613 [2024-07-12 17:16:00.365259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.547 test_start 00:05:22.547 oneshot 00:05:22.547 tick 100 00:05:22.547 tick 100 00:05:22.547 tick 250 00:05:22.547 tick 100 00:05:22.547 tick 100 00:05:22.547 tick 100 00:05:22.547 tick 250 00:05:22.547 tick 500 00:05:22.547 tick 100 00:05:22.547 tick 100 00:05:22.547 tick 250 00:05:22.547 tick 100 00:05:22.547 tick 100 00:05:22.547 test_end 00:05:22.547 00:05:22.547 real 0m1.207s 00:05:22.547 user 0m1.111s 00:05:22.547 sys 0m0.090s 00:05:22.548 17:16:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.548 17:16:01 -- common/autotest_common.sh@10 -- # set +x 00:05:22.548 ************************************ 00:05:22.548 END TEST event_reactor 00:05:22.548 ************************************ 00:05:22.548 17:16:01 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.548 17:16:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:22.548 17:16:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.548 17:16:01 -- common/autotest_common.sh@10 -- # set +x 00:05:22.548 ************************************ 00:05:22.548 START TEST event_reactor_perf 00:05:22.548 ************************************ 00:05:22.548 17:16:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.548 [2024-07-12 17:16:01.489167] 
Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:22.548 [2024-07-12 17:16:01.489249] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920045 ] 00:05:22.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.806 [2024-07-12 17:16:01.571571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.806 [2024-07-12 17:16:01.609865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.770 test_start 00:05:23.770 test_end 00:05:23.770 Performance: 308226 events per second 00:05:23.770 00:05:23.770 real 0m1.203s 00:05:23.770 user 0m1.104s 00:05:23.770 sys 0m0.093s 00:05:23.770 17:16:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.770 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:23.770 ************************************ 00:05:23.770 END TEST event_reactor_perf 00:05:23.770 ************************************ 00:05:23.770 17:16:02 -- event/event.sh@49 -- # uname -s 00:05:23.770 17:16:02 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.770 17:16:02 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.770 17:16:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.770 17:16:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.770 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:23.770 ************************************ 00:05:23.770 START TEST event_scheduler 00:05:23.770 ************************************ 00:05:23.770 17:16:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.033 * Looking for test storage... 
00:05:24.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.033 17:16:02 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.033 17:16:02 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3920350 00:05:24.033 17:16:02 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.033 17:16:02 -- scheduler/scheduler.sh@37 -- # waitforlisten 3920350 00:05:24.033 17:16:02 -- common/autotest_common.sh@819 -- # '[' -z 3920350 ']' 00:05:24.033 17:16:02 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.033 17:16:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.033 17:16:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.033 17:16:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.033 17:16:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.033 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.033 [2024-07-12 17:16:02.873977] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:24.034 [2024-07-12 17:16:02.874086] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920350 ] 00:05:24.034 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.292 [2024-07-12 17:16:03.023829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.292 [2024-07-12 17:16:03.090348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.292 [2024-07-12 17:16:03.090382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.292 [2024-07-12 17:16:03.090495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.292 [2024-07-12 17:16:03.090501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.292 17:16:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:24.292 17:16:03 -- common/autotest_common.sh@852 -- # return 0 00:05:24.292 17:16:03 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.292 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.292 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 POWER: Env isn't set yet! 00:05:24.292 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:24.292 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.292 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.292 POWER: Attempting to initialise PSTAT power management... 
00:05:24.292 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:24.292 POWER: Initialized successfully for lcore 0 power management 00:05:24.292 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:24.292 POWER: Initialized successfully for lcore 1 power management 00:05:24.292 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:24.292 POWER: Initialized successfully for lcore 2 power management 00:05:24.292 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:24.292 POWER: Initialized successfully for lcore 3 power management 00:05:24.292 [2024-07-12 17:16:03.166462] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.292 [2024-07-12 17:16:03.166476] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.292 [2024-07-12 17:16:03.166484] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.292 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.292 17:16:03 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.292 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.292 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 [2024-07-12 17:16:03.228202] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:24.292 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.292 17:16:03 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.292 17:16:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.292 17:16:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.292 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 ************************************ 00:05:24.292 START TEST scheduler_create_thread 00:05:24.292 ************************************ 00:05:24.292 17:16:03 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:24.292 17:16:03 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.292 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.292 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.292 2 00:05:24.292 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.292 17:16:03 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.292 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.292 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 3 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 4 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 
17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 5 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 6 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 7 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 8 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 9 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 10 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:05:24.550 17:16:03 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.550 17:16:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:24.550 17:16:03 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:24.550 17:16:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.550 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.484 17:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.484 17:16:04 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:25.484 17:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.484 17:16:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.859 17:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.860 17:16:05 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.860 17:16:05 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.860 17:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.860 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.795 17:16:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.795 00:05:27.795 real 0m3.380s 00:05:27.795 user 0m0.021s 00:05:27.795 sys 0m0.007s 00:05:27.795 17:16:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.795 17:16:06 -- common/autotest_common.sh@10 -- # set +x 00:05:27.795 ************************************ 00:05:27.795 END TEST scheduler_create_thread 00:05:27.795 ************************************ 00:05:27.795 17:16:06 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:27.795 17:16:06 -- 
scheduler/scheduler.sh@46 -- # killprocess 3920350 00:05:27.795 17:16:06 -- common/autotest_common.sh@926 -- # '[' -z 3920350 ']' 00:05:27.795 17:16:06 -- common/autotest_common.sh@930 -- # kill -0 3920350 00:05:27.795 17:16:06 -- common/autotest_common.sh@931 -- # uname 00:05:27.795 17:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.795 17:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3920350 00:05:27.795 17:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:27.795 17:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:27.795 17:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3920350' 00:05:27.795 killing process with pid 3920350 00:05:27.796 17:16:06 -- common/autotest_common.sh@945 -- # kill 3920350 00:05:27.796 17:16:06 -- common/autotest_common.sh@950 -- # wait 3920350 00:05:28.066 [2024-07-12 17:16:06.995997] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:28.369 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:28.369 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:28.369 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:28.369 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:28.369 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:28.369 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:28.369 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:28.369 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:28.369 00:05:28.369 real 0m4.492s 00:05:28.369 user 0m7.863s 00:05:28.369 sys 0m0.400s 00:05:28.369 17:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.369 17:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:28.369 ************************************ 00:05:28.369 END TEST event_scheduler 00:05:28.369 ************************************ 00:05:28.369 17:16:07 -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.369 17:16:07 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.369 17:16:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.369 17:16:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.369 17:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:28.369 ************************************ 00:05:28.369 START TEST app_repeat 00:05:28.369 ************************************ 00:05:28.369 17:16:07 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:28.369 17:16:07 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.369 17:16:07 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.369 
17:16:07 -- event/event.sh@13 -- # local nbd_list 00:05:28.369 17:16:07 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.369 17:16:07 -- event/event.sh@14 -- # local bdev_list 00:05:28.369 17:16:07 -- event/event.sh@15 -- # local repeat_times=4 00:05:28.369 17:16:07 -- event/event.sh@17 -- # modprobe nbd 00:05:28.369 17:16:07 -- event/event.sh@19 -- # repeat_pid=3921194 00:05:28.369 17:16:07 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.369 17:16:07 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.369 17:16:07 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3921194' 00:05:28.369 Process app_repeat pid: 3921194 00:05:28.369 17:16:07 -- event/event.sh@23 -- # for i in {0..2} 00:05:28.369 17:16:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.369 spdk_app_start Round 0 00:05:28.369 17:16:07 -- event/event.sh@25 -- # waitforlisten 3921194 /var/tmp/spdk-nbd.sock 00:05:28.369 17:16:07 -- common/autotest_common.sh@819 -- # '[' -z 3921194 ']' 00:05:28.369 17:16:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.369 17:16:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.369 17:16:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.369 17:16:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.369 17:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:28.369 [2024-07-12 17:16:07.290180] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:28.369 [2024-07-12 17:16:07.290271] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921194 ] 00:05:28.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.628 [2024-07-12 17:16:07.359687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.628 [2024-07-12 17:16:07.401579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.628 [2024-07-12 17:16:07.401584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.628 17:16:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.628 17:16:07 -- common/autotest_common.sh@852 -- # return 0 00:05:28.628 17:16:07 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.888 Malloc0 00:05:28.888 17:16:07 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.146 Malloc1 00:05:29.146 17:16:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@12 -- # local i 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.146 17:16:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.405 /dev/nbd0 00:05:29.405 17:16:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.405 17:16:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.405 17:16:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:29.405 17:16:08 -- common/autotest_common.sh@857 -- # local i 00:05:29.405 17:16:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:29.405 17:16:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:29.405 17:16:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:29.405 17:16:08 -- common/autotest_common.sh@861 -- # break 00:05:29.405 17:16:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:29.405 17:16:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:29.405 17:16:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.405 1+0 records in 00:05:29.405 1+0 records out 00:05:29.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00011597 s, 35.3 MB/s 00:05:29.405 17:16:08 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.405 17:16:08 -- common/autotest_common.sh@874 -- # size=4096 00:05:29.405 17:16:08 -- common/autotest_common.sh@875 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.405 17:16:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:29.405 17:16:08 -- common/autotest_common.sh@877 -- # return 0 00:05:29.405 17:16:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.405 17:16:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.405 17:16:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.664 /dev/nbd1 00:05:29.664 17:16:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.664 17:16:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.664 17:16:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:29.664 17:16:08 -- common/autotest_common.sh@857 -- # local i 00:05:29.664 17:16:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:29.664 17:16:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:29.664 17:16:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:29.664 17:16:08 -- common/autotest_common.sh@861 -- # break 00:05:29.664 17:16:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:29.664 17:16:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:29.664 17:16:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.664 1+0 records in 00:05:29.664 1+0 records out 00:05:29.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198915 s, 20.6 MB/s 00:05:29.664 17:16:08 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.664 17:16:08 -- common/autotest_common.sh@874 -- # size=4096 00:05:29.664 17:16:08 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.664 17:16:08 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:05:29.664 17:16:08 -- common/autotest_common.sh@877 -- # return 0 00:05:29.664 17:16:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.664 17:16:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.664 17:16:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.664 17:16:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.664 17:16:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.923 { 00:05:29.923 "nbd_device": "/dev/nbd0", 00:05:29.923 "bdev_name": "Malloc0" 00:05:29.923 }, 00:05:29.923 { 00:05:29.923 "nbd_device": "/dev/nbd1", 00:05:29.923 "bdev_name": "Malloc1" 00:05:29.923 } 00:05:29.923 ]' 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.923 { 00:05:29.923 "nbd_device": "/dev/nbd0", 00:05:29.923 "bdev_name": "Malloc0" 00:05:29.923 }, 00:05:29.923 { 00:05:29.923 "nbd_device": "/dev/nbd1", 00:05:29.923 "bdev_name": "Malloc1" 00:05:29.923 } 00:05:29.923 ]' 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.923 /dev/nbd1' 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.923 /dev/nbd1' 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.923 
17:16:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.923 256+0 records in 00:05:29.923 256+0 records out 00:05:29.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100525 s, 104 MB/s 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.923 256+0 records in 00:05:29.923 256+0 records out 00:05:29.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019078 s, 55.0 MB/s 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.923 17:16:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.923 256+0 records in 00:05:29.923 256+0 records out 00:05:29.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203752 s, 51.5 MB/s 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.182 17:16:08 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@51 -- # local i 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.182 17:16:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@41 -- # break 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.441 17:16:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@41 -- # break 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.700 17:16:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.959 17:16:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.959 17:16:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.959 17:16:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@65 -- # true 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.218 17:16:09 -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.218 17:16:09 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.477 17:16:10 -- event/event.sh@35 -- # sleep 3 00:05:31.477 [2024-07-12 17:16:10.399593] app.c: 798:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:05:31.477 [2024-07-12 17:16:10.437619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.477 [2024-07-12 17:16:10.437624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.747 [2024-07-12 17:16:10.482601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.747 [2024-07-12 17:16:10.482646] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.278 17:16:13 -- event/event.sh@23 -- # for i in {0..2} 00:05:34.278 17:16:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.278 spdk_app_start Round 1 00:05:34.278 17:16:13 -- event/event.sh@25 -- # waitforlisten 3921194 /var/tmp/spdk-nbd.sock 00:05:34.278 17:16:13 -- common/autotest_common.sh@819 -- # '[' -z 3921194 ']' 00:05:34.278 17:16:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.278 17:16:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.278 17:16:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:34.278 17:16:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.278 17:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:34.537 17:16:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.537 17:16:13 -- common/autotest_common.sh@852 -- # return 0 00:05:34.537 17:16:13 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.795 Malloc0 00:05:34.795 17:16:13 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.054 Malloc1 00:05:35.054 17:16:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@12 -- # local i 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.054 17:16:13 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.313 /dev/nbd0 00:05:35.313 17:16:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.313 17:16:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.313 17:16:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:35.313 17:16:14 -- common/autotest_common.sh@857 -- # local i 00:05:35.313 17:16:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:35.313 17:16:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:35.313 17:16:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:35.313 17:16:14 -- common/autotest_common.sh@861 -- # break 00:05:35.313 17:16:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:35.313 17:16:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:35.313 17:16:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.313 1+0 records in 00:05:35.313 1+0 records out 00:05:35.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223738 s, 18.3 MB/s 00:05:35.313 17:16:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.313 17:16:14 -- common/autotest_common.sh@874 -- # size=4096 00:05:35.313 17:16:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.313 17:16:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:35.313 17:16:14 -- common/autotest_common.sh@877 -- # return 0 00:05:35.313 17:16:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.313 17:16:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.313 17:16:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:05:35.572 /dev/nbd1 00:05:35.572 17:16:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.572 17:16:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.572 17:16:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:35.572 17:16:14 -- common/autotest_common.sh@857 -- # local i 00:05:35.572 17:16:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:35.572 17:16:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:35.572 17:16:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:35.572 17:16:14 -- common/autotest_common.sh@861 -- # break 00:05:35.572 17:16:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:35.572 17:16:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:35.572 17:16:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.572 1+0 records in 00:05:35.572 1+0 records out 00:05:35.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209748 s, 19.5 MB/s 00:05:35.572 17:16:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.572 17:16:14 -- common/autotest_common.sh@874 -- # size=4096 00:05:35.572 17:16:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.572 17:16:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:35.572 17:16:14 -- common/autotest_common.sh@877 -- # return 0 00:05:35.572 17:16:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.572 17:16:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.572 17:16:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.572 17:16:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.572 17:16:14 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.831 { 00:05:35.831 "nbd_device": "/dev/nbd0", 00:05:35.831 "bdev_name": "Malloc0" 00:05:35.831 }, 00:05:35.831 { 00:05:35.831 "nbd_device": "/dev/nbd1", 00:05:35.831 "bdev_name": "Malloc1" 00:05:35.831 } 00:05:35.831 ]' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.831 { 00:05:35.831 "nbd_device": "/dev/nbd0", 00:05:35.831 "bdev_name": "Malloc0" 00:05:35.831 }, 00:05:35.831 { 00:05:35.831 "nbd_device": "/dev/nbd1", 00:05:35.831 "bdev_name": "Malloc1" 00:05:35.831 } 00:05:35.831 ]' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.831 /dev/nbd1' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.831 /dev/nbd1' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.831 256+0 records in 00:05:35.831 256+0 records out 00:05:35.831 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103679 s, 101 MB/s 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.831 256+0 records in 00:05:35.831 256+0 records out 00:05:35.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019177 s, 54.7 MB/s 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.831 256+0 records in 00:05:35.831 256+0 records out 00:05:35.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203186 s, 51.6 MB/s 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@51 -- # local i 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.831 17:16:14 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@41 -- # break 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.089 17:16:14 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.348 17:16:15 -- 
bdev/nbd_common.sh@41 -- # break 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.348 17:16:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@65 -- # true 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.607 17:16:15 -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.607 17:16:15 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.866 17:16:15 -- event/event.sh@35 -- # sleep 3 00:05:37.125 [2024-07-12 17:16:15.855974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.125 [2024-07-12 17:16:15.894539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.125 [2024-07-12 17:16:15.894544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.125 [2024-07-12 17:16:15.939303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:37.125 [2024-07-12 17:16:15.939346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.414 17:16:18 -- event/event.sh@23 -- # for i in {0..2} 00:05:40.414 17:16:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.414 spdk_app_start Round 2 00:05:40.414 17:16:18 -- event/event.sh@25 -- # waitforlisten 3921194 /var/tmp/spdk-nbd.sock 00:05:40.414 17:16:18 -- common/autotest_common.sh@819 -- # '[' -z 3921194 ']' 00:05:40.414 17:16:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.414 17:16:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.414 17:16:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.414 17:16:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.414 17:16:18 -- common/autotest_common.sh@10 -- # set +x 00:05:40.414 17:16:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.414 17:16:18 -- common/autotest_common.sh@852 -- # return 0 00:05:40.414 17:16:18 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.414 Malloc0 00:05:40.414 17:16:19 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.414 Malloc1 00:05:40.673 17:16:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.673 17:16:19 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@12 -- # local i 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.673 /dev/nbd0 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.673 17:16:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.673 17:16:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:40.931 17:16:19 -- common/autotest_common.sh@857 -- # local i 00:05:40.931 17:16:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:40.931 17:16:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:40.931 17:16:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:40.931 17:16:19 -- common/autotest_common.sh@861 -- # break 00:05:40.931 17:16:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:40.931 17:16:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:40.931 17:16:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.932 1+0 records in 00:05:40.932 
1+0 records out 00:05:40.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184667 s, 22.2 MB/s 00:05:40.932 17:16:19 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.932 17:16:19 -- common/autotest_common.sh@874 -- # size=4096 00:05:40.932 17:16:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.932 17:16:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:40.932 17:16:19 -- common/autotest_common.sh@877 -- # return 0 00:05:40.932 17:16:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.932 17:16:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.932 17:16:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.932 /dev/nbd1 00:05:41.191 17:16:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.191 17:16:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.191 17:16:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:41.191 17:16:19 -- common/autotest_common.sh@857 -- # local i 00:05:41.191 17:16:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:41.191 17:16:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:41.191 17:16:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:41.191 17:16:19 -- common/autotest_common.sh@861 -- # break 00:05:41.191 17:16:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:41.191 17:16:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:41.191 17:16:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.191 1+0 records in 00:05:41.191 1+0 records out 00:05:41.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201708 s, 20.3 MB/s 00:05:41.191 17:16:19 -- 
common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.191 17:16:19 -- common/autotest_common.sh@874 -- # size=4096 00:05:41.191 17:16:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.191 17:16:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:41.191 17:16:19 -- common/autotest_common.sh@877 -- # return 0 00:05:41.191 17:16:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.191 17:16:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.191 17:16:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.191 17:16:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.191 17:16:19 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.450 { 00:05:41.450 "nbd_device": "/dev/nbd0", 00:05:41.450 "bdev_name": "Malloc0" 00:05:41.450 }, 00:05:41.450 { 00:05:41.450 "nbd_device": "/dev/nbd1", 00:05:41.450 "bdev_name": "Malloc1" 00:05:41.450 } 00:05:41.450 ]' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.450 { 00:05:41.450 "nbd_device": "/dev/nbd0", 00:05:41.450 "bdev_name": "Malloc0" 00:05:41.450 }, 00:05:41.450 { 00:05:41.450 "nbd_device": "/dev/nbd1", 00:05:41.450 "bdev_name": "Malloc1" 00:05:41.450 } 00:05:41.450 ]' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.450 /dev/nbd1' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.450 /dev/nbd1' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.450 
17:16:20 -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.450 256+0 records in 00:05:41.450 256+0 records out 00:05:41.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101191 s, 104 MB/s 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.450 256+0 records in 00:05:41.450 256+0 records out 00:05:41.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189905 s, 55.2 MB/s 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.450 256+0 records in 00:05:41.450 256+0 records out 00:05:41.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204043 s, 51.4 MB/s 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:05:41.450 17:16:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@51 -- # local i 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.450 17:16:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.710 17:16:20 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@41 -- # break 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.710 17:16:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@41 -- # break 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.969 17:16:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.228 17:16:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.228 17:16:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.228 17:16:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@65 -- # true 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.228 17:16:21 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.228 17:16:21 -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.228 17:16:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.487 17:16:21 -- event/event.sh@35 -- # sleep 3 00:05:42.745 [2024-07-12 17:16:21.476591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.745 [2024-07-12 17:16:21.514953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.745 [2024-07-12 17:16:21.514958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.745 [2024-07-12 17:16:21.559789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.745 [2024-07-12 17:16:21.559841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.027 17:16:24 -- event/event.sh@38 -- # waitforlisten 3921194 /var/tmp/spdk-nbd.sock 00:05:46.027 17:16:24 -- common/autotest_common.sh@819 -- # '[' -z 3921194 ']' 00:05:46.027 17:16:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.027 17:16:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.027 17:16:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:46.027 17:16:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.027 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.027 17:16:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.027 17:16:24 -- common/autotest_common.sh@852 -- # return 0 00:05:46.027 17:16:24 -- event/event.sh@39 -- # killprocess 3921194 00:05:46.027 17:16:24 -- common/autotest_common.sh@926 -- # '[' -z 3921194 ']' 00:05:46.027 17:16:24 -- common/autotest_common.sh@930 -- # kill -0 3921194 00:05:46.027 17:16:24 -- common/autotest_common.sh@931 -- # uname 00:05:46.027 17:16:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.028 17:16:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3921194 00:05:46.028 17:16:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.028 17:16:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.028 17:16:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3921194' 00:05:46.028 killing process with pid 3921194 00:05:46.028 17:16:24 -- common/autotest_common.sh@945 -- # kill 3921194 00:05:46.028 17:16:24 -- common/autotest_common.sh@950 -- # wait 3921194 00:05:46.028 spdk_app_start is called in Round 0. 00:05:46.028 Shutdown signal received, stop current app iteration 00:05:46.028 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:05:46.028 spdk_app_start is called in Round 1. 00:05:46.028 Shutdown signal received, stop current app iteration 00:05:46.028 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:05:46.028 spdk_app_start is called in Round 2. 00:05:46.028 Shutdown signal received, stop current app iteration 00:05:46.028 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:05:46.028 spdk_app_start is called in Round 3. 
00:05:46.028 Shutdown signal received, stop current app iteration 00:05:46.028 17:16:24 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.028 17:16:24 -- event/event.sh@42 -- # return 0 00:05:46.028 00:05:46.028 real 0m17.502s 00:05:46.028 user 0m39.099s 00:05:46.028 sys 0m2.741s 00:05:46.028 17:16:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.028 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.028 ************************************ 00:05:46.028 END TEST app_repeat 00:05:46.028 ************************************ 00:05:46.028 17:16:24 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.028 17:16:24 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.028 17:16:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.028 17:16:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.028 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.028 ************************************ 00:05:46.028 START TEST cpu_locks 00:05:46.028 ************************************ 00:05:46.028 17:16:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.028 * Looking for test storage... 
00:05:46.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.028 17:16:24 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.028 17:16:24 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.028 17:16:24 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.028 17:16:24 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.028 17:16:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.028 17:16:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.028 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.028 ************************************ 00:05:46.028 START TEST default_locks 00:05:46.028 ************************************ 00:05:46.028 17:16:24 -- common/autotest_common.sh@1104 -- # default_locks 00:05:46.028 17:16:24 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3924624 00:05:46.028 17:16:24 -- event/cpu_locks.sh@47 -- # waitforlisten 3924624 00:05:46.028 17:16:24 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.028 17:16:24 -- common/autotest_common.sh@819 -- # '[' -z 3924624 ']' 00:05:46.028 17:16:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.028 17:16:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.028 17:16:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.028 17:16:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.028 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.028 [2024-07-12 17:16:24.944546] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:46.028 [2024-07-12 17:16:24.944609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924624 ] 00:05:46.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.286 [2024-07-12 17:16:25.027926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.286 [2024-07-12 17:16:25.070025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.286 [2024-07-12 17:16:25.070175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.221 17:16:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.221 17:16:25 -- common/autotest_common.sh@852 -- # return 0 00:05:47.221 17:16:25 -- event/cpu_locks.sh@49 -- # locks_exist 3924624 00:05:47.221 17:16:25 -- event/cpu_locks.sh@22 -- # lslocks -p 3924624 00:05:47.221 17:16:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.221 lslocks: write error 00:05:47.221 17:16:26 -- event/cpu_locks.sh@50 -- # killprocess 3924624 00:05:47.221 17:16:26 -- common/autotest_common.sh@926 -- # '[' -z 3924624 ']' 00:05:47.221 17:16:26 -- common/autotest_common.sh@930 -- # kill -0 3924624 00:05:47.221 17:16:26 -- common/autotest_common.sh@931 -- # uname 00:05:47.221 17:16:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.221 17:16:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3924624 00:05:47.221 17:16:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.221 17:16:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.221 17:16:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3924624' 00:05:47.221 killing process with pid 3924624 00:05:47.221 17:16:26 -- common/autotest_common.sh@945 -- # kill 3924624 00:05:47.221 17:16:26 -- common/autotest_common.sh@950 -- # 
wait 3924624 00:05:47.789 17:16:26 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3924624 00:05:47.789 17:16:26 -- common/autotest_common.sh@640 -- # local es=0 00:05:47.789 17:16:26 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3924624 00:05:47.789 17:16:26 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:47.789 17:16:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.789 17:16:26 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:47.789 17:16:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.789 17:16:26 -- common/autotest_common.sh@643 -- # waitforlisten 3924624 00:05:47.789 17:16:26 -- common/autotest_common.sh@819 -- # '[' -z 3924624 ']' 00:05:47.789 17:16:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.789 17:16:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.789 17:16:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.789 17:16:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.789 17:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3924624) - No such process 00:05:47.789 ERROR: process (pid: 3924624) is no longer running 00:05:47.789 17:16:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.789 17:16:26 -- common/autotest_common.sh@852 -- # return 1 00:05:47.789 17:16:26 -- common/autotest_common.sh@643 -- # es=1 00:05:47.789 17:16:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:47.789 17:16:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:47.789 17:16:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:47.789 17:16:26 -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.789 17:16:26 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.789 17:16:26 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.789 17:16:26 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.789 00:05:47.789 real 0m1.586s 00:05:47.789 user 0m1.741s 00:05:47.789 sys 0m0.523s 00:05:47.789 17:16:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.789 17:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.789 ************************************ 00:05:47.789 END TEST default_locks 00:05:47.789 ************************************ 00:05:47.789 17:16:26 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.789 17:16:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.789 17:16:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.789 17:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.789 ************************************ 00:05:47.789 START TEST default_locks_via_rpc 00:05:47.789 ************************************ 00:05:47.789 17:16:26 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:47.789 17:16:26 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=3924919 00:05:47.789 17:16:26 -- event/cpu_locks.sh@63 -- # waitforlisten 3924919 00:05:47.789 17:16:26 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.789 17:16:26 -- common/autotest_common.sh@819 -- # '[' -z 3924919 ']' 00:05:47.789 17:16:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.789 17:16:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.789 17:16:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.789 17:16:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.789 17:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.789 [2024-07-12 17:16:26.560848] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:47.789 [2024-07-12 17:16:26.560909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924919 ] 00:05:47.789 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.789 [2024-07-12 17:16:26.644288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.789 [2024-07-12 17:16:26.686507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.789 [2024-07-12 17:16:26.686657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.727 17:16:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.727 17:16:27 -- common/autotest_common.sh@852 -- # return 0 00:05:48.727 17:16:27 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.727 17:16:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.727 17:16:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.727 17:16:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.727 17:16:27 -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.727 17:16:27 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.727 17:16:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.727 17:16:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.727 17:16:27 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.727 17:16:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.727 17:16:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.727 17:16:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.727 17:16:27 -- event/cpu_locks.sh@71 -- # locks_exist 3924919 00:05:48.727 17:16:27 -- event/cpu_locks.sh@22 -- # lslocks -p 3924919 00:05:48.727 17:16:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.295 17:16:27 -- event/cpu_locks.sh@73 -- # killprocess 3924919 
00:05:49.295 17:16:27 -- common/autotest_common.sh@926 -- # '[' -z 3924919 ']' 00:05:49.295 17:16:27 -- common/autotest_common.sh@930 -- # kill -0 3924919 00:05:49.295 17:16:27 -- common/autotest_common.sh@931 -- # uname 00:05:49.295 17:16:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.295 17:16:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3924919 00:05:49.295 17:16:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.295 17:16:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.295 17:16:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3924919' 00:05:49.295 killing process with pid 3924919 00:05:49.295 17:16:28 -- common/autotest_common.sh@945 -- # kill 3924919 00:05:49.295 17:16:28 -- common/autotest_common.sh@950 -- # wait 3924919 00:05:49.555 00:05:49.555 real 0m1.827s 00:05:49.555 user 0m2.004s 00:05:49.555 sys 0m0.593s 00:05:49.555 17:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.555 17:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.555 ************************************ 00:05:49.555 END TEST default_locks_via_rpc 00:05:49.555 ************************************ 00:05:49.555 17:16:28 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.555 17:16:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.555 17:16:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.555 17:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.555 ************************************ 00:05:49.555 START TEST non_locking_app_on_locked_coremask 00:05:49.555 ************************************ 00:05:49.555 17:16:28 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:49.555 17:16:28 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3925424 00:05:49.555 17:16:28 -- event/cpu_locks.sh@81 -- # waitforlisten 3925424 
/var/tmp/spdk.sock 00:05:49.555 17:16:28 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.555 17:16:28 -- common/autotest_common.sh@819 -- # '[' -z 3925424 ']' 00:05:49.555 17:16:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.555 17:16:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.555 17:16:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.555 17:16:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.555 17:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.555 [2024-07-12 17:16:28.433472] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:49.555 [2024-07-12 17:16:28.433542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925424 ] 00:05:49.555 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.555 [2024-07-12 17:16:28.515303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.813 [2024-07-12 17:16:28.557564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.813 [2024-07-12 17:16:28.557712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.749 17:16:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.749 17:16:29 -- common/autotest_common.sh@852 -- # return 0 00:05:50.749 17:16:29 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3925486 00:05:50.749 17:16:29 -- event/cpu_locks.sh@85 -- # waitforlisten 3925486 /var/tmp/spdk2.sock 00:05:50.749 17:16:29 -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.749 17:16:29 -- common/autotest_common.sh@819 -- # '[' -z 3925486 ']' 00:05:50.749 17:16:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.749 17:16:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.749 17:16:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.749 17:16:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.749 17:16:29 -- common/autotest_common.sh@10 -- # set +x 00:05:50.749 [2024-07-12 17:16:29.406767] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:50.750 [2024-07-12 17:16:29.406826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925486 ] 00:05:50.750 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.750 [2024-07-12 17:16:29.516672] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.750 [2024-07-12 17:16:29.516700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.750 [2024-07-12 17:16:29.597991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.750 [2024-07-12 17:16:29.598144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.703 17:16:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.703 17:16:30 -- common/autotest_common.sh@852 -- # return 0 00:05:51.703 17:16:30 -- event/cpu_locks.sh@87 -- # locks_exist 3925424 00:05:51.703 17:16:30 -- event/cpu_locks.sh@22 -- # lslocks -p 3925424 00:05:51.703 17:16:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.270 lslocks: write error 00:05:52.270 17:16:31 -- event/cpu_locks.sh@89 -- # killprocess 3925424 00:05:52.270 17:16:31 -- common/autotest_common.sh@926 -- # '[' -z 3925424 ']' 00:05:52.270 17:16:31 -- common/autotest_common.sh@930 -- # kill -0 3925424 00:05:52.270 17:16:31 -- common/autotest_common.sh@931 -- # uname 00:05:52.270 17:16:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.270 17:16:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3925424 00:05:52.270 17:16:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.270 17:16:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.270 17:16:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3925424' 00:05:52.270 killing process with pid 3925424 00:05:52.270 17:16:31 -- common/autotest_common.sh@945 -- # kill 3925424 00:05:52.270 17:16:31 -- common/autotest_common.sh@950 -- # wait 3925424 00:05:52.836 17:16:31 -- event/cpu_locks.sh@90 -- # killprocess 3925486 00:05:52.836 17:16:31 -- common/autotest_common.sh@926 -- # '[' -z 3925486 ']' 00:05:52.836 17:16:31 -- common/autotest_common.sh@930 -- # kill -0 3925486 00:05:52.836 17:16:31 -- common/autotest_common.sh@931 -- # uname 00:05:52.836 17:16:31 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.836 17:16:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3925486 00:05:52.836 17:16:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.836 17:16:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.836 17:16:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3925486' 00:05:52.836 killing process with pid 3925486 00:05:52.836 17:16:31 -- common/autotest_common.sh@945 -- # kill 3925486 00:05:52.836 17:16:31 -- common/autotest_common.sh@950 -- # wait 3925486 00:05:53.400 00:05:53.400 real 0m3.731s 00:05:53.400 user 0m4.151s 00:05:53.400 sys 0m1.114s 00:05:53.400 17:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.400 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:53.400 ************************************ 00:05:53.400 END TEST non_locking_app_on_locked_coremask 00:05:53.400 ************************************ 00:05:53.400 17:16:32 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.400 17:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.400 17:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.400 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:53.400 ************************************ 00:05:53.400 START TEST locking_app_on_unlocked_coremask 00:05:53.400 ************************************ 00:05:53.400 17:16:32 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:53.400 17:16:32 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3926050 00:05:53.400 17:16:32 -- event/cpu_locks.sh@99 -- # waitforlisten 3926050 /var/tmp/spdk.sock 00:05:53.400 17:16:32 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.400 17:16:32 -- common/autotest_common.sh@819 -- # '[' -z 3926050 ']' 
00:05:53.400 17:16:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.400 17:16:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.400 17:16:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.400 17:16:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.400 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:53.400 [2024-07-12 17:16:32.204067] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:53.400 [2024-07-12 17:16:32.204124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926050 ] 00:05:53.400 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.400 [2024-07-12 17:16:32.283627] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.400 [2024-07-12 17:16:32.283655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.400 [2024-07-12 17:16:32.325491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.400 [2024-07-12 17:16:32.325641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.335 17:16:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.335 17:16:33 -- common/autotest_common.sh@852 -- # return 0 00:05:54.335 17:16:33 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.335 17:16:33 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3926315 00:05:54.335 17:16:33 -- event/cpu_locks.sh@103 -- # waitforlisten 3926315 /var/tmp/spdk2.sock 00:05:54.335 17:16:33 -- common/autotest_common.sh@819 -- # '[' -z 3926315 ']' 00:05:54.335 17:16:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.335 17:16:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.335 17:16:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.335 17:16:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.335 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:05:54.335 [2024-07-12 17:16:33.167490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:54.335 [2024-07-12 17:16:33.167549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926315 ] 00:05:54.335 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.335 [2024-07-12 17:16:33.276303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.593 [2024-07-12 17:16:33.360040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.593 [2024-07-12 17:16:33.360188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.159 17:16:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.159 17:16:34 -- common/autotest_common.sh@852 -- # return 0 00:05:55.159 17:16:34 -- event/cpu_locks.sh@105 -- # locks_exist 3926315 00:05:55.159 17:16:34 -- event/cpu_locks.sh@22 -- # lslocks -p 3926315 00:05:55.159 17:16:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.418 lslocks: write error 00:05:55.418 17:16:34 -- event/cpu_locks.sh@107 -- # killprocess 3926050 00:05:55.418 17:16:34 -- common/autotest_common.sh@926 -- # '[' -z 3926050 ']' 00:05:55.418 17:16:34 -- common/autotest_common.sh@930 -- # kill -0 3926050 00:05:55.418 17:16:34 -- common/autotest_common.sh@931 -- # uname 00:05:55.418 17:16:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.418 17:16:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3926050 00:05:55.677 17:16:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.677 17:16:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.677 17:16:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3926050' 00:05:55.677 killing process with pid 3926050 00:05:55.677 17:16:34 -- common/autotest_common.sh@945 -- # kill 3926050 00:05:55.677 17:16:34 -- common/autotest_common.sh@950 -- # 
wait 3926050 00:05:56.245 17:16:35 -- event/cpu_locks.sh@108 -- # killprocess 3926315 00:05:56.245 17:16:35 -- common/autotest_common.sh@926 -- # '[' -z 3926315 ']' 00:05:56.246 17:16:35 -- common/autotest_common.sh@930 -- # kill -0 3926315 00:05:56.246 17:16:35 -- common/autotest_common.sh@931 -- # uname 00:05:56.246 17:16:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.246 17:16:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3926315 00:05:56.246 17:16:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.246 17:16:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.246 17:16:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3926315' 00:05:56.246 killing process with pid 3926315 00:05:56.246 17:16:35 -- common/autotest_common.sh@945 -- # kill 3926315 00:05:56.246 17:16:35 -- common/autotest_common.sh@950 -- # wait 3926315 00:05:56.505 00:05:56.505 real 0m3.239s 00:05:56.505 user 0m3.648s 00:05:56.505 sys 0m0.875s 00:05:56.505 17:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.505 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.505 ************************************ 00:05:56.505 END TEST locking_app_on_unlocked_coremask 00:05:56.505 ************************************ 00:05:56.505 17:16:35 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:56.505 17:16:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.505 17:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.505 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.505 ************************************ 00:05:56.505 START TEST locking_app_on_locked_coremask 00:05:56.505 ************************************ 00:05:56.505 17:16:35 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:56.505 17:16:35 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3926739 
00:05:56.505 17:16:35 -- event/cpu_locks.sh@116 -- # waitforlisten 3926739 /var/tmp/spdk.sock 00:05:56.505 17:16:35 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.505 17:16:35 -- common/autotest_common.sh@819 -- # '[' -z 3926739 ']' 00:05:56.505 17:16:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.505 17:16:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.505 17:16:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.505 17:16:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.505 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.764 [2024-07-12 17:16:35.483672] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:56.764 [2024-07-12 17:16:35.483733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926739 ] 00:05:56.764 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.764 [2024-07-12 17:16:35.565632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.764 [2024-07-12 17:16:35.607759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.764 [2024-07-12 17:16:35.607909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.701 17:16:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.701 17:16:36 -- common/autotest_common.sh@852 -- # return 0 00:05:57.701 17:16:36 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3926888 00:05:57.701 17:16:36 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3926888 /var/tmp/spdk2.sock 
00:05:57.701 17:16:36 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.701 17:16:36 -- common/autotest_common.sh@640 -- # local es=0 00:05:57.701 17:16:36 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3926888 /var/tmp/spdk2.sock 00:05:57.701 17:16:36 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:57.701 17:16:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.701 17:16:36 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:57.701 17:16:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.701 17:16:36 -- common/autotest_common.sh@643 -- # waitforlisten 3926888 /var/tmp/spdk2.sock 00:05:57.701 17:16:36 -- common/autotest_common.sh@819 -- # '[' -z 3926888 ']' 00:05:57.701 17:16:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.701 17:16:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.701 17:16:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.701 17:16:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.701 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.701 [2024-07-12 17:16:36.459578] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:57.701 [2024-07-12 17:16:36.459641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926888 ] 00:05:57.701 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.701 [2024-07-12 17:16:36.570863] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3926739 has claimed it. 00:05:57.701 [2024-07-12 17:16:36.570904] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3926888) - No such process 00:05:58.269 ERROR: process (pid: 3926888) is no longer running 00:05:58.269 17:16:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.269 17:16:37 -- common/autotest_common.sh@852 -- # return 1 00:05:58.269 17:16:37 -- common/autotest_common.sh@643 -- # es=1 00:05:58.269 17:16:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:58.269 17:16:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:58.269 17:16:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:58.269 17:16:37 -- event/cpu_locks.sh@122 -- # locks_exist 3926739 00:05:58.269 17:16:37 -- event/cpu_locks.sh@22 -- # lslocks -p 3926739 00:05:58.269 17:16:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.880 lslocks: write error 00:05:58.880 17:16:37 -- event/cpu_locks.sh@124 -- # killprocess 3926739 00:05:58.880 17:16:37 -- common/autotest_common.sh@926 -- # '[' -z 3926739 ']' 00:05:58.880 17:16:37 -- common/autotest_common.sh@930 -- # kill -0 3926739 00:05:58.880 17:16:37 -- common/autotest_common.sh@931 -- # uname 00:05:58.880 17:16:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:58.880 17:16:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3926739 00:05:58.880 17:16:37 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:58.880 17:16:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:58.880 17:16:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3926739' 00:05:58.880 killing process with pid 3926739 00:05:58.880 17:16:37 -- common/autotest_common.sh@945 -- # kill 3926739 00:05:58.880 17:16:37 -- common/autotest_common.sh@950 -- # wait 3926739 00:05:59.156 00:05:59.156 real 0m2.534s 00:05:59.156 user 0m2.924s 00:05:59.156 sys 0m0.700s 00:05:59.156 17:16:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.156 17:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:59.156 ************************************ 00:05:59.156 END TEST locking_app_on_locked_coremask 00:05:59.156 ************************************ 00:05:59.156 17:16:37 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:59.156 17:16:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.156 17:16:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.156 17:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:59.156 ************************************ 00:05:59.156 START TEST locking_overlapped_coremask 00:05:59.156 ************************************ 00:05:59.156 17:16:38 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:59.156 17:16:38 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3927190 00:05:59.156 17:16:38 -- event/cpu_locks.sh@133 -- # waitforlisten 3927190 /var/tmp/spdk.sock 00:05:59.156 17:16:38 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:59.156 17:16:38 -- common/autotest_common.sh@819 -- # '[' -z 3927190 ']' 00:05:59.156 17:16:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.156 17:16:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.156 17:16:38 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.156 17:16:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.156 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.156 [2024-07-12 17:16:38.058076] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:59.156 [2024-07-12 17:16:38.058142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927190 ] 00:05:59.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.450 [2024-07-12 17:16:38.141763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.450 [2024-07-12 17:16:38.182819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.450 [2024-07-12 17:16:38.183030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.450 [2024-07-12 17:16:38.183144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.450 [2024-07-12 17:16:38.183147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.018 17:16:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.018 17:16:38 -- common/autotest_common.sh@852 -- # return 0 00:06:00.018 17:16:38 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.018 17:16:38 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3927461 00:06:00.018 17:16:38 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3927461 /var/tmp/spdk2.sock 00:06:00.018 17:16:38 -- common/autotest_common.sh@640 -- # local es=0 00:06:00.018 17:16:38 -- common/autotest_common.sh@642 -- # 
valid_exec_arg waitforlisten 3927461 /var/tmp/spdk2.sock 00:06:00.018 17:16:38 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:00.018 17:16:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:00.018 17:16:38 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:00.018 17:16:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:00.018 17:16:38 -- common/autotest_common.sh@643 -- # waitforlisten 3927461 /var/tmp/spdk2.sock 00:06:00.018 17:16:38 -- common/autotest_common.sh@819 -- # '[' -z 3927461 ']' 00:06:00.018 17:16:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.018 17:16:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.018 17:16:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.018 17:16:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.018 17:16:38 -- common/autotest_common.sh@10 -- # set +x 00:06:00.277 [2024-07-12 17:16:39.029653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:00.277 [2024-07-12 17:16:39.029716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927461 ] 00:06:00.277 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.277 [2024-07-12 17:16:39.108733] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3927190 has claimed it. 00:06:00.277 [2024-07-12 17:16:39.108770] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:00.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3927461) - No such process 00:06:00.845 ERROR: process (pid: 3927461) is no longer running 00:06:00.845 17:16:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.845 17:16:39 -- common/autotest_common.sh@852 -- # return 1 00:06:00.845 17:16:39 -- common/autotest_common.sh@643 -- # es=1 00:06:00.845 17:16:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:00.845 17:16:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:00.845 17:16:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:00.845 17:16:39 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:00.845 17:16:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.845 17:16:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.845 17:16:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.845 17:16:39 -- event/cpu_locks.sh@141 -- # killprocess 3927190 00:06:00.845 17:16:39 -- common/autotest_common.sh@926 -- # '[' -z 3927190 ']' 00:06:00.845 17:16:39 -- common/autotest_common.sh@930 -- # kill -0 3927190 00:06:00.845 17:16:39 -- common/autotest_common.sh@931 -- # uname 00:06:00.845 17:16:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:00.845 17:16:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3927190 00:06:00.845 17:16:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:00.845 17:16:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:00.845 17:16:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3927190' 00:06:00.845 killing process with pid 3927190 00:06:00.845 
17:16:39 -- common/autotest_common.sh@945 -- # kill 3927190 00:06:00.845 17:16:39 -- common/autotest_common.sh@950 -- # wait 3927190 00:06:01.413 00:06:01.413 real 0m2.083s 00:06:01.413 user 0m6.073s 00:06:01.413 sys 0m0.452s 00:06:01.413 17:16:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.413 17:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.413 ************************************ 00:06:01.413 END TEST locking_overlapped_coremask 00:06:01.413 ************************************ 00:06:01.413 17:16:40 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:01.413 17:16:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.413 17:16:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.413 17:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.413 ************************************ 00:06:01.413 START TEST locking_overlapped_coremask_via_rpc 00:06:01.413 ************************************ 00:06:01.413 17:16:40 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:01.413 17:16:40 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3927751 00:06:01.413 17:16:40 -- event/cpu_locks.sh@149 -- # waitforlisten 3927751 /var/tmp/spdk.sock 00:06:01.413 17:16:40 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:01.413 17:16:40 -- common/autotest_common.sh@819 -- # '[' -z 3927751 ']' 00:06:01.413 17:16:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.413 17:16:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.413 17:16:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
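The repeated "Waiting for process to start up and listen on UNIX domain socket..." lines come from waitforlisten in autotest_common.sh, which polls with a max_retries budget. A hedged stand-in (it checks only socket existence, which is weaker than the real probe, but shows the retry shape):

```shell
#!/usr/bin/env bash
# Sketch of waitforlisten's polling loop; names mirror the log,
# the body is an illustration, not the real implementation.
waitforlisten_demo() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -S "$sock" ] && return 0   # something is listening
        sleep 0.05
    done
    echo "process never listened on $sock" >&2
    return 1
}

# With no target running, the loop exhausts its retries:
waitforlisten_demo /var/tmp/spdk2.sock 1 2>/dev/null || echo "gave up waiting"
```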
00:06:01.413 17:16:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.413 17:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.413 [2024-07-12 17:16:40.177996] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:01.413 [2024-07-12 17:16:40.178059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927751 ] 00:06:01.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.413 [2024-07-12 17:16:40.258477] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.414 [2024-07-12 17:16:40.258511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.414 [2024-07-12 17:16:40.303000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.414 [2024-07-12 17:16:40.303183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.414 [2024-07-12 17:16:40.303304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.414 [2024-07-12 17:16:40.303308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.351 17:16:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.351 17:16:41 -- common/autotest_common.sh@852 -- # return 0 00:06:02.351 17:16:41 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3927839 00:06:02.351 17:16:41 -- event/cpu_locks.sh@153 -- # waitforlisten 3927839 /var/tmp/spdk2.sock 00:06:02.351 17:16:41 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.351 17:16:41 -- common/autotest_common.sh@819 -- # '[' -z 3927839 ']' 00:06:02.351 17:16:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.351 17:16:41 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:06:02.351 17:16:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.351 17:16:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.351 17:16:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.351 [2024-07-12 17:16:41.153617] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:02.351 [2024-07-12 17:16:41.153680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927839 ] 00:06:02.351 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.351 [2024-07-12 17:16:41.238500] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.351 [2024-07-12 17:16:41.238526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.351 [2024-07-12 17:16:41.312882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.351 [2024-07-12 17:16:41.313027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.610 [2024-07-12 17:16:41.319298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.610 [2024-07-12 17:16:41.319299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.546 17:16:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.546 17:16:42 -- common/autotest_common.sh@852 -- # return 0 00:06:03.546 17:16:42 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.546 17:16:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.546 17:16:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.546 17:16:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:06:03.546 17:16:42 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.546 17:16:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:03.546 17:16:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.546 17:16:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:03.546 17:16:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:03.546 17:16:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:03.546 17:16:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:03.546 17:16:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.546 17:16:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.546 17:16:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.546 [2024-07-12 17:16:42.220317] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3927751 has claimed it. 
00:06:03.546 request: 00:06:03.546 { 00:06:03.546 "method": "framework_enable_cpumask_locks", 00:06:03.546 "req_id": 1 00:06:03.546 } 00:06:03.546 Got JSON-RPC error response 00:06:03.546 response: 00:06:03.546 { 00:06:03.546 "code": -32603, 00:06:03.546 "message": "Failed to claim CPU core: 2" 00:06:03.546 } 00:06:03.546 17:16:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:03.546 17:16:42 -- common/autotest_common.sh@643 -- # es=1 00:06:03.546 17:16:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:03.546 17:16:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:03.546 17:16:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:03.546 17:16:42 -- event/cpu_locks.sh@158 -- # waitforlisten 3927751 /var/tmp/spdk.sock 00:06:03.546 17:16:42 -- common/autotest_common.sh@819 -- # '[' -z 3927751 ']' 00:06:03.546 17:16:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.546 17:16:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.546 17:16:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
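The JSON-RPC response above (code -32603, "Failed to claim CPU core: 2") is what rpc_cmd surfaces when the lock claim fails. A dependency-free sketch of checking that error code from a response string pasted from the log (sed stands in for jq here):

```shell
#!/usr/bin/env bash
# Extract the "code" field from a JSON-RPC error response and check
# for -32603 (internal error). Response text is from the log above.
response='{"code": -32603, "message": "Failed to claim CPU core: 2"}'
code=$(printf '%s\n' "$response" | sed -n 's/.*"code": \(-[0-9]*\).*/\1/p')
[ "$code" = "-32603" ] && echo "cpumask lock claim rejected"
```

This mirrors how the test treats the rejection as the expected outcome (es=1 rather than a test failure).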
00:06:03.546 17:16:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.546 17:16:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.546 17:16:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.546 17:16:42 -- common/autotest_common.sh@852 -- # return 0 00:06:03.546 17:16:42 -- event/cpu_locks.sh@159 -- # waitforlisten 3927839 /var/tmp/spdk2.sock 00:06:03.546 17:16:42 -- common/autotest_common.sh@819 -- # '[' -z 3927839 ']' 00:06:03.546 17:16:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.546 17:16:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.546 17:16:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.546 17:16:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.546 17:16:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.805 17:16:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.805 17:16:42 -- common/autotest_common.sh@852 -- # return 0 00:06:03.805 17:16:42 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:03.805 17:16:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.805 17:16:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.805 17:16:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.805 00:06:03.805 real 0m2.593s 00:06:03.805 user 0m1.309s 00:06:03.805 sys 0m0.213s 00:06:03.805 17:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.805 17:16:42 -- common/autotest_common.sh@10 -- # set +x 00:06:03.805 
************************************ 00:06:03.805 END TEST locking_overlapped_coremask_via_rpc 00:06:03.805 ************************************ 00:06:03.805 17:16:42 -- event/cpu_locks.sh@174 -- # cleanup 00:06:03.805 17:16:42 -- event/cpu_locks.sh@15 -- # [[ -z 3927751 ]] 00:06:03.805 17:16:42 -- event/cpu_locks.sh@15 -- # killprocess 3927751 00:06:03.805 17:16:42 -- common/autotest_common.sh@926 -- # '[' -z 3927751 ']' 00:06:03.805 17:16:42 -- common/autotest_common.sh@930 -- # kill -0 3927751 00:06:03.805 17:16:42 -- common/autotest_common.sh@931 -- # uname 00:06:03.805 17:16:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:03.805 17:16:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3927751 00:06:04.064 17:16:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:04.064 17:16:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:04.064 17:16:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3927751' 00:06:04.064 killing process with pid 3927751 00:06:04.064 17:16:42 -- common/autotest_common.sh@945 -- # kill 3927751 00:06:04.064 17:16:42 -- common/autotest_common.sh@950 -- # wait 3927751 00:06:04.324 17:16:43 -- event/cpu_locks.sh@16 -- # [[ -z 3927839 ]] 00:06:04.324 17:16:43 -- event/cpu_locks.sh@16 -- # killprocess 3927839 00:06:04.324 17:16:43 -- common/autotest_common.sh@926 -- # '[' -z 3927839 ']' 00:06:04.324 17:16:43 -- common/autotest_common.sh@930 -- # kill -0 3927839 00:06:04.324 17:16:43 -- common/autotest_common.sh@931 -- # uname 00:06:04.324 17:16:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:04.324 17:16:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3927839 00:06:04.324 17:16:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:04.324 17:16:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:04.324 17:16:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
3927839' 00:06:04.324 killing process with pid 3927839 00:06:04.324 17:16:43 -- common/autotest_common.sh@945 -- # kill 3927839 00:06:04.324 17:16:43 -- common/autotest_common.sh@950 -- # wait 3927839 00:06:04.583 17:16:43 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.583 17:16:43 -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.583 17:16:43 -- event/cpu_locks.sh@15 -- # [[ -z 3927751 ]] 00:06:04.583 17:16:43 -- event/cpu_locks.sh@15 -- # killprocess 3927751 00:06:04.583 17:16:43 -- common/autotest_common.sh@926 -- # '[' -z 3927751 ']' 00:06:04.583 17:16:43 -- common/autotest_common.sh@930 -- # kill -0 3927751 00:06:04.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3927751) - No such process 00:06:04.583 17:16:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3927751 is not found' 00:06:04.583 Process with pid 3927751 is not found 00:06:04.583 17:16:43 -- event/cpu_locks.sh@16 -- # [[ -z 3927839 ]] 00:06:04.583 17:16:43 -- event/cpu_locks.sh@16 -- # killprocess 3927839 00:06:04.583 17:16:43 -- common/autotest_common.sh@926 -- # '[' -z 3927839 ']' 00:06:04.583 17:16:43 -- common/autotest_common.sh@930 -- # kill -0 3927839 00:06:04.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3927839) - No such process 00:06:04.583 17:16:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3927839 is not found' 00:06:04.583 Process with pid 3927839 is not found 00:06:04.583 17:16:43 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.583 00:06:04.583 real 0m18.675s 00:06:04.583 user 0m34.620s 00:06:04.583 sys 0m5.305s 00:06:04.583 17:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.583 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.583 ************************************ 00:06:04.583 END TEST cpu_locks 00:06:04.583 ************************************ 00:06:04.583 00:06:04.583 real 0m44.628s 00:06:04.583 user 1m28.036s 
00:06:04.583 sys 0m8.970s 00:06:04.583 17:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.583 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.583 ************************************ 00:06:04.583 END TEST event 00:06:04.583 ************************************ 00:06:04.583 17:16:43 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.583 17:16:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.583 17:16:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.583 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.583 ************************************ 00:06:04.583 START TEST thread 00:06:04.583 ************************************ 00:06:04.583 17:16:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.841 * Looking for test storage... 00:06:04.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:04.841 17:16:43 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.841 17:16:43 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:04.841 17:16:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.841 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.841 ************************************ 00:06:04.841 START TEST thread_poller_perf 00:06:04.841 ************************************ 00:06:04.841 17:16:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.841 [2024-07-12 17:16:43.645760] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:04.841 [2024-07-12 17:16:43.645822] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928387 ] 00:06:04.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.841 [2024-07-12 17:16:43.726320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.841 [2024-07-12 17:16:43.767656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.841 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:06.219 ====================================== 00:06:06.219 busy:2211067612 (cyc) 00:06:06.219 total_run_count: 247000 00:06:06.219 tsc_hz: 2200000000 (cyc) 00:06:06.219 ====================================== 00:06:06.219 poller_cost: 8951 (cyc), 4068 (nsec) 00:06:06.219 00:06:06.219 real 0m1.206s 00:06:06.219 user 0m1.117s 00:06:06.219 sys 0m0.082s 00:06:06.219 17:16:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.219 17:16:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.219 ************************************ 00:06:06.219 END TEST thread_poller_perf 00:06:06.219 ************************************ 00:06:06.219 17:16:44 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.219 17:16:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:06.219 17:16:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.219 17:16:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.219 ************************************ 00:06:06.219 START TEST thread_poller_perf 00:06:06.219 ************************************ 00:06:06.219 17:16:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.219 
[2024-07-12 17:16:44.896120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:06.219 [2024-07-12 17:16:44.896196] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928667 ] 00:06:06.219 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.219 [2024-07-12 17:16:44.976229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.219 [2024-07-12 17:16:45.016530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.219 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:07.154 ====================================== 00:06:07.154 busy:2203039346 (cyc) 00:06:07.154 total_run_count: 3403000 00:06:07.154 tsc_hz: 2200000000 (cyc) 00:06:07.154 ====================================== 00:06:07.154 poller_cost: 647 (cyc), 294 (nsec) 00:06:07.154 00:06:07.154 real 0m1.203s 00:06:07.154 user 0m1.102s 00:06:07.154 sys 0m0.094s 00:06:07.155 17:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.155 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.155 ************************************ 00:06:07.155 END TEST thread_poller_perf 00:06:07.155 ************************************ 00:06:07.155 17:16:46 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.155 00:06:07.155 real 0m2.565s 00:06:07.155 user 0m2.287s 00:06:07.155 sys 0m0.281s 00:06:07.155 17:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.155 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.155 ************************************ 00:06:07.155 END TEST thread 00:06:07.155 ************************************ 00:06:07.414 17:16:46 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:07.414 17:16:46 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:06:07.414 17:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.414 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.414 ************************************ 00:06:07.414 START TEST accel 00:06:07.414 ************************************ 00:06:07.414 17:16:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:07.414 * Looking for test storage... 00:06:07.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:07.414 17:16:46 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:07.414 17:16:46 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:07.414 17:16:46 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.414 17:16:46 -- accel/accel.sh@59 -- # spdk_tgt_pid=3928991 00:06:07.414 17:16:46 -- accel/accel.sh@60 -- # waitforlisten 3928991 00:06:07.414 17:16:46 -- common/autotest_common.sh@819 -- # '[' -z 3928991 ']' 00:06:07.414 17:16:46 -- accel/accel.sh@58 -- # build_accel_config 00:06:07.414 17:16:46 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:07.414 17:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.414 17:16:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.414 17:16:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.414 17:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.414 17:16:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.414 17:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
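The poller_cost figures in the thread_poller_perf summaries above can be re-derived by hand from the numbers each summary prints: cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz. Using the first run's values:

```shell
#!/usr/bin/env bash
# Re-derive poller_cost from the first poller_perf summary above:
# busy:2211067612 (cyc), total_run_count: 247000, tsc_hz: 2200000000.
busy=2211067612 runs=247000 tsc_hz=2200000000
cost_cyc=$(( busy / runs ))                       # cycles per poller run
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # converted via TSC rate
echo "poller_cost: $cost_cyc (cyc), $cost_nsec (nsec)"
# → poller_cost: 8951 (cyc), 4068 (nsec)
```

The same arithmetic on the second run (2203039346 cyc over 3403000 runs) gives 647 cyc / 294 nsec, matching its summary.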
00:06:07.414 17:16:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.414 17:16:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.414 17:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.414 17:16:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.414 17:16:46 -- accel/accel.sh@42 -- # jq -r . 00:06:07.414 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.414 [2024-07-12 17:16:46.285687] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:07.414 [2024-07-12 17:16:46.285751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928991 ] 00:06:07.414 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.414 [2024-07-12 17:16:46.367992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.674 [2024-07-12 17:16:46.410348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.674 [2024-07-12 17:16:46.410498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.241 17:16:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.241 17:16:47 -- common/autotest_common.sh@852 -- # return 0 00:06:08.241 17:16:47 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:08.241 17:16:47 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:08.241 17:16:47 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:08.241 17:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.241 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.501 17:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for 
opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 
00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # IFS== 00:06:08.501 17:16:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.501 17:16:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.501 17:16:47 -- accel/accel.sh@67 -- # killprocess 3928991 00:06:08.501 17:16:47 -- common/autotest_common.sh@926 -- # '[' -z 3928991 ']' 00:06:08.501 17:16:47 -- common/autotest_common.sh@930 -- # kill -0 3928991 00:06:08.501 17:16:47 -- common/autotest_common.sh@931 -- # uname 00:06:08.501 17:16:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.501 17:16:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3928991 00:06:08.501 17:16:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.501 17:16:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.501 17:16:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3928991' 00:06:08.501 killing process with pid 3928991 00:06:08.501 17:16:47 -- common/autotest_common.sh@945 -- # kill 3928991 00:06:08.501 17:16:47 -- common/autotest_common.sh@950 -- # wait 3928991 00:06:08.761 17:16:47 -- accel/accel.sh@68 -- # trap - ERR 00:06:08.761 17:16:47 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:08.761 17:16:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:08.761 17:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.761 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.761 17:16:47 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:08.761 17:16:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:08.761 17:16:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.761 17:16:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.761 17:16:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.761 17:16:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.761 17:16:47 -- accel/accel.sh@35 
-- # [[ 0 -gt 0 ]] 00:06:08.761 17:16:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.761 17:16:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.761 17:16:47 -- accel/accel.sh@42 -- # jq -r . 00:06:08.761 17:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.761 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.761 17:16:47 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:08.761 17:16:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:08.761 17:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.761 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.761 ************************************ 00:06:08.761 START TEST accel_missing_filename 00:06:08.761 ************************************ 00:06:08.761 17:16:47 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:08.761 17:16:47 -- common/autotest_common.sh@640 -- # local es=0 00:06:08.761 17:16:47 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:08.761 17:16:47 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:08.761 17:16:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:08.761 17:16:47 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:08.761 17:16:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:08.761 17:16:47 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:08.761 17:16:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.761 17:16:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.761 17:16:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:08.761 17:16:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.761 17:16:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.761 17:16:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.761 
17:16:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.761 17:16:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.761 17:16:47 -- accel/accel.sh@42 -- # jq -r . 00:06:08.761 [2024-07-12 17:16:47.715921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:08.761 [2024-07-12 17:16:47.715979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929292 ] 00:06:09.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.020 [2024-07-12 17:16:47.794917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.020 [2024-07-12 17:16:47.836401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.020 [2024-07-12 17:16:47.880410] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.020 [2024-07-12 17:16:47.942143] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:09.279 A filename is required. 
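The accel_missing_filename test above runs accel_perf under a NOT wrapper: the test passes only when the command fails, and the xtrace shows the exit status being normalized (es=234 folded down, since values above 128 encode signal deaths) before the final comparison. A simplified sketch of that pattern:

```shell
#!/usr/bin/env bash
# Sketch of autotest_common.sh's NOT pattern: succeed only when the
# wrapped command fails, folding signal-death statuses (>128) first.
# Simplified illustration, not the real helper.
NOT_demo() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then es=$(( es - 128 )); fi
    (( es != 0 ))   # NOT succeeds only on a nonzero (failing) status
}

NOT_demo false && echo "expected failure observed"
```

This is why "A filename is required." followed by a nonzero exit counts as a passing test here.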
00:06:09.279 17:16:48 -- common/autotest_common.sh@643 -- # es=234 00:06:09.279 17:16:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:09.279 17:16:48 -- common/autotest_common.sh@652 -- # es=106 00:06:09.279 17:16:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:09.279 17:16:48 -- common/autotest_common.sh@660 -- # es=1 00:06:09.279 17:16:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:09.279 00:06:09.279 real 0m0.309s 00:06:09.279 user 0m0.201s 00:06:09.279 sys 0m0.127s 00:06:09.279 17:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.279 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.279 ************************************ 00:06:09.279 END TEST accel_missing_filename 00:06:09.279 ************************************ 00:06:09.279 17:16:48 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.279 17:16:48 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:09.279 17:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.279 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.279 ************************************ 00:06:09.279 START TEST accel_compress_verify 00:06:09.279 ************************************ 00:06:09.279 17:16:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.279 17:16:48 -- common/autotest_common.sh@640 -- # local es=0 00:06:09.279 17:16:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.279 17:16:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:09.279 17:16:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.279 17:16:48 -- common/autotest_common.sh@632 -- # type -t 
accel_perf 00:06:09.279 17:16:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.279 17:16:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.279 17:16:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.279 17:16:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.279 17:16:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.279 17:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.279 17:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.279 17:16:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.279 17:16:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.279 17:16:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.279 17:16:48 -- accel/accel.sh@42 -- # jq -r . 00:06:09.279 [2024-07-12 17:16:48.063459] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:09.279 [2024-07-12 17:16:48.063519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929321 ] 00:06:09.279 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.279 [2024-07-12 17:16:48.131636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.279 [2024-07-12 17:16:48.172821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.279 [2024-07-12 17:16:48.217485] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.538 [2024-07-12 17:16:48.280589] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:09.538 00:06:09.538 Compression does not support the verify option, aborting. 
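The `es=234` / `(( es > 128 ))` / `es=1` sequences traced above come from SPDK's `NOT` helper in `autotest_common.sh`, which expects the wrapped command to fail. A simplified, hedged sketch of that exit-status handling (details assumed from the xtrace, not the real source):

```shell
# Hedged sketch of the NOT wrapper seen in the xtrace above
# (SPDK autotest_common.sh); simplified, exact details assumed.
NOT() {
    local es=0
    "$@" || es=$?
    # statuses above 128 usually mean death-by-signal (128+N);
    # fold back to the raw code, as the trace shows (234 -> 106)
    if (( es > 128 )); then
        es=$(( es - 128 ))
    fi
    # collapse every nonzero status to 1 so callers see a uniform failure
    case "$es" in
        0) ;;
        *) es=1 ;;
    esac
    # NOT succeeds only when the wrapped command failed
    (( !es == 0 ))
}

NOT false && echo "command failed as expected"
```

The inversion at the end is why a test like `accel_missing_filename` passes exactly when `accel_perf` aborts with "A filename is required."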
00:06:09.538 17:16:48 -- common/autotest_common.sh@643 -- # es=161 00:06:09.538 17:16:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:09.538 17:16:48 -- common/autotest_common.sh@652 -- # es=33 00:06:09.538 17:16:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:09.538 17:16:48 -- common/autotest_common.sh@660 -- # es=1 00:06:09.538 17:16:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:09.538 00:06:09.538 real 0m0.300s 00:06:09.538 user 0m0.222s 00:06:09.538 sys 0m0.119s 00:06:09.538 17:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.538 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.538 ************************************ 00:06:09.538 END TEST accel_compress_verify 00:06:09.538 ************************************ 00:06:09.538 17:16:48 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:09.538 17:16:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:09.538 17:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.538 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.538 ************************************ 00:06:09.538 START TEST accel_wrong_workload 00:06:09.538 ************************************ 00:06:09.538 17:16:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:09.538 17:16:48 -- common/autotest_common.sh@640 -- # local es=0 00:06:09.538 17:16:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:09.538 17:16:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:09.538 17:16:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.538 17:16:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:09.538 17:16:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.538 17:16:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:09.538 17:16:48 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:09.538 17:16:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.538 17:16:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.538 17:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.538 17:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.538 17:16:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.538 17:16:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.538 17:16:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.538 17:16:48 -- accel/accel.sh@42 -- # jq -r . 00:06:09.538 Unsupported workload type: foobar 00:06:09.538 [2024-07-12 17:16:48.406786] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:09.538 accel_perf options: 00:06:09.538 [-h help message] 00:06:09.538 [-q queue depth per core] 00:06:09.538 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:09.538 [-T number of threads per core 00:06:09.538 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:09.538 [-t time in seconds] 00:06:09.538 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:09.538 [ dif_verify, , dif_generate, dif_generate_copy 00:06:09.538 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:09.538 [-l for compress/decompress workloads, name of uncompressed input file 00:06:09.538 [-S for crc32c workload, use this seed value (default 0) 00:06:09.538 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:09.538 [-f for fill workload, use this BYTE value (default 255) 00:06:09.538 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:09.538 [-y verify result if this switch is on] 00:06:09.538 [-a tasks to allocate per core (default: same value as -q)] 00:06:09.538 Can be used to spread operations across a wider range of memory. 00:06:09.538 17:16:48 -- common/autotest_common.sh@643 -- # es=1 00:06:09.538 17:16:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:09.538 17:16:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:09.538 17:16:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:09.538 00:06:09.538 real 0m0.030s 00:06:09.538 user 0m0.015s 00:06:09.538 sys 0m0.015s 00:06:09.538 17:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.538 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.538 ************************************ 00:06:09.538 END TEST accel_wrong_workload 00:06:09.538 ************************************ 00:06:09.538 17:16:48 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:09.538 17:16:48 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:09.538 17:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.538 17:16:48 -- common/autotest_common.sh@10 -- # set 
+x 00:06:09.538 ************************************ 00:06:09.538 START TEST accel_negative_buffers 00:06:09.538 ************************************ 00:06:09.538 17:16:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:09.538 17:16:48 -- common/autotest_common.sh@640 -- # local es=0 00:06:09.538 17:16:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:09.538 17:16:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:09.538 17:16:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.538 17:16:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:09.538 17:16:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.538 17:16:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:09.538 17:16:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:09.539 17:16:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.539 17:16:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.539 17:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.539 17:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.539 17:16:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.539 17:16:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.539 17:16:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.539 17:16:48 -- accel/accel.sh@42 -- # jq -r . 00:06:09.539 Error: writing output failed: Broken pipe 00:06:09.539 -x option must be non-negative. 
00:06:09.539 [2024-07-12 17:16:48.471959] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:09.539 accel_perf options: 00:06:09.539 [-h help message] 00:06:09.539 [-q queue depth per core] 00:06:09.539 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:09.539 [-T number of threads per core 00:06:09.539 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:09.539 [-t time in seconds] 00:06:09.539 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:09.539 [ dif_verify, , dif_generate, dif_generate_copy 00:06:09.539 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:09.539 [-l for compress/decompress workloads, name of uncompressed input file 00:06:09.539 [-S for crc32c workload, use this seed value (default 0) 00:06:09.539 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:09.539 [-f for fill workload, use this BYTE value (default 255) 00:06:09.539 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:09.539 [-y verify result if this switch is on] 00:06:09.539 [-a tasks to allocate per core (default: same value as -q)] 00:06:09.539 Can be used to spread operations across a wider range of memory. 
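The `START TEST` / `END TEST` banners throughout this log are printed by a `run_test` helper. A minimal sketch of that pattern (the helper name comes from the trace; the real version also times the command with `time`, which is omitted here):

```shell
# Hedged sketch of the run_test helper implied by the banners above;
# simplified: the real SPDK helper also records wall/user/sys timing.
run_test() {
    local test_name="$1"
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"                 # run the wrapped test command
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test demo_test true
```

Calls in this log such as `run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1` compose the two helpers: the banner wrapper around the failure-expecting wrapper around the binary.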
00:06:09.539 17:16:48 -- common/autotest_common.sh@643 -- # es=1 00:06:09.539 17:16:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:09.539 17:16:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:09.539 17:16:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:09.539 00:06:09.539 real 0m0.031s 00:06:09.539 user 0m0.089s 00:06:09.539 sys 0m0.015s 00:06:09.539 17:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.539 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.539 ************************************ 00:06:09.539 END TEST accel_negative_buffers 00:06:09.539 ************************************ 00:06:09.539 Error: writing output failed: Broken pipe 00:06:09.798 17:16:48 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:09.798 17:16:48 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:09.798 17:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.798 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.798 ************************************ 00:06:09.798 START TEST accel_crc32c 00:06:09.798 ************************************ 00:06:09.798 17:16:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:09.798 17:16:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.798 17:16:48 -- accel/accel.sh@17 -- # local accel_module 00:06:09.798 17:16:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:09.798 17:16:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:09.798 17:16:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.798 17:16:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.798 17:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.798 17:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.798 17:16:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.798 17:16:48 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.798 17:16:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.798 17:16:48 -- accel/accel.sh@42 -- # jq -r . 00:06:09.798 [2024-07-12 17:16:48.542166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:09.798 [2024-07-12 17:16:48.542234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929584 ] 00:06:09.798 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.798 [2024-07-12 17:16:48.622411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.798 [2024-07-12 17:16:48.663672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.172 17:16:49 -- accel/accel.sh@18 -- # out=' 00:06:11.172 SPDK Configuration: 00:06:11.172 Core mask: 0x1 00:06:11.172 00:06:11.172 Accel Perf Configuration: 00:06:11.172 Workload Type: crc32c 00:06:11.172 CRC-32C seed: 32 00:06:11.172 Transfer size: 4096 bytes 00:06:11.172 Vector count 1 00:06:11.172 Module: software 00:06:11.172 Queue depth: 32 00:06:11.172 Allocate depth: 32 00:06:11.172 # threads/core: 1 00:06:11.172 Run time: 1 seconds 00:06:11.172 Verify: Yes 00:06:11.172 00:06:11.172 Running for 1 seconds... 
00:06:11.172 00:06:11.172 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.172 ------------------------------------------------------------------------------------ 00:06:11.172 0,0 357216/s 1395 MiB/s 0 0 00:06:11.172 ==================================================================================== 00:06:11.172 Total 357216/s 1395 MiB/s 0 0' 00:06:11.172 17:16:49 -- accel/accel.sh@20 -- # IFS=: 00:06:11.172 17:16:49 -- accel/accel.sh@20 -- # read -r var val 00:06:11.172 17:16:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:11.172 17:16:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:11.172 17:16:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.172 17:16:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.173 17:16:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.173 17:16:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.173 17:16:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.173 17:16:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.173 17:16:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.173 17:16:49 -- accel/accel.sh@42 -- # jq -r . 00:06:11.173 [2024-07-12 17:16:49.865964] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:11.173 [2024-07-12 17:16:49.866042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929781 ] 00:06:11.173 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.173 [2024-07-12 17:16:49.947476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.173 [2024-07-12 17:16:49.987638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val= 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val= 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=0x1 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val= 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val= 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=crc32c 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- 
accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=32 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val= 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=software 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=32 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=32 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=1 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 
-- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val=Yes 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val= 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:11.173 17:16:50 -- accel/accel.sh@21 -- # val= 00:06:11.173 17:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # IFS=: 00:06:11.173 17:16:50 -- accel/accel.sh@20 -- # read -r var val 00:06:12.548 17:16:51 -- accel/accel.sh@21 -- # val= 00:06:12.548 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:06:12.548 17:16:51 -- accel/accel.sh@21 -- # val= 00:06:12.548 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:06:12.548 17:16:51 -- accel/accel.sh@21 -- # val= 00:06:12.548 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:06:12.548 17:16:51 -- accel/accel.sh@21 -- # val= 00:06:12.548 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:06:12.548 17:16:51 -- accel/accel.sh@21 -- # val= 00:06:12.548 17:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:06:12.548 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:06:12.548 17:16:51 -- accel/accel.sh@21 -- # val= 00:06:12.549 17:16:51 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:12.549 17:16:51 -- accel/accel.sh@20 -- # IFS=: 00:06:12.549 17:16:51 -- accel/accel.sh@20 -- # read -r var val 00:06:12.549 17:16:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.549 17:16:51 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:12.549 17:16:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.549 00:06:12.549 real 0m2.654s 00:06:12.549 user 0m2.384s 00:06:12.549 sys 0m0.277s 00:06:12.549 17:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.549 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:12.549 ************************************ 00:06:12.549 END TEST accel_crc32c 00:06:12.549 ************************************ 00:06:12.549 17:16:51 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:12.549 17:16:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:12.549 17:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.549 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:06:12.549 ************************************ 00:06:12.549 START TEST accel_crc32c_C2 00:06:12.549 ************************************ 00:06:12.549 17:16:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:12.549 17:16:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.549 17:16:51 -- accel/accel.sh@17 -- # local accel_module 00:06:12.549 17:16:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:12.549 17:16:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:12.549 17:16:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.549 17:16:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.549 17:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.549 17:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.549 17:16:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.549 17:16:51 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.549 17:16:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.549 17:16:51 -- accel/accel.sh@42 -- # jq -r . 00:06:12.549 [2024-07-12 17:16:51.218977] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:12.549 [2024-07-12 17:16:51.219021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930025 ] 00:06:12.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.549 [2024-07-12 17:16:51.287393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.549 [2024-07-12 17:16:51.328055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.924 17:16:52 -- accel/accel.sh@18 -- # out=' 00:06:13.924 SPDK Configuration: 00:06:13.924 Core mask: 0x1 00:06:13.924 00:06:13.924 Accel Perf Configuration: 00:06:13.924 Workload Type: crc32c 00:06:13.924 CRC-32C seed: 0 00:06:13.924 Transfer size: 4096 bytes 00:06:13.924 Vector count 2 00:06:13.924 Module: software 00:06:13.924 Queue depth: 32 00:06:13.924 Allocate depth: 32 00:06:13.924 # threads/core: 1 00:06:13.924 Run time: 1 seconds 00:06:13.924 Verify: Yes 00:06:13.924 00:06:13.924 Running for 1 seconds... 
00:06:13.924 00:06:13.924 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.924 ------------------------------------------------------------------------------------ 00:06:13.924 0,0 282208/s 1102 MiB/s 0 0 00:06:13.924 ==================================================================================== 00:06:13.924 Total 282208/s 1102 MiB/s 0 0' 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:13.924 17:16:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:13.924 17:16:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.924 17:16:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.924 17:16:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.924 17:16:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.924 17:16:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.924 17:16:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.924 17:16:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.924 17:16:52 -- accel/accel.sh@42 -- # jq -r . 00:06:13.924 [2024-07-12 17:16:52.528733] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:06:13.924 [2024-07-12 17:16:52.528795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930208 ] 00:06:13.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.924 [2024-07-12 17:16:52.609212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.924 [2024-07-12 17:16:52.649196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val= 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val= 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val=0x1 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val= 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val= 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val=crc32c 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- 
accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val=0 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val= 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val=software 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val=32 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val=32 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.924 17:16:52 -- accel/accel.sh@21 -- # val=1 00:06:13.924 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.924 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.925 17:16:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.925 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- 
# read -r var val 00:06:13.925 17:16:52 -- accel/accel.sh@21 -- # val=Yes 00:06:13.925 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.925 17:16:52 -- accel/accel.sh@21 -- # val= 00:06:13.925 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:13.925 17:16:52 -- accel/accel.sh@21 -- # val= 00:06:13.925 17:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- # IFS=: 00:06:13.925 17:16:52 -- accel/accel.sh@20 -- # read -r var val 00:06:14.859 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:14.859 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.859 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:14.859 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:14.859 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:14.859 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.859 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:14.859 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:14.859 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:15.117 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.117 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:15.117 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.117 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:15.117 17:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.117 17:16:53 -- accel/accel.sh@21 -- # val= 00:06:15.117 17:16:53 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.117 17:16:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.117 17:16:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.117 17:16:53 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:15.117 17:16:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.117 00:06:15.117 real 0m2.623s 00:06:15.117 user 0m2.370s 00:06:15.117 sys 0m0.259s 00:06:15.117 17:16:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.117 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 ************************************ 00:06:15.117 END TEST accel_crc32c_C2 00:06:15.117 ************************************ 00:06:15.117 17:16:53 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:15.117 17:16:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:15.117 17:16:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.117 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 ************************************ 00:06:15.117 START TEST accel_copy 00:06:15.117 ************************************ 00:06:15.117 17:16:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:15.117 17:16:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.117 17:16:53 -- accel/accel.sh@17 -- # local accel_module 00:06:15.117 17:16:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:15.117 17:16:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:15.117 17:16:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.117 17:16:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.117 17:16:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.117 17:16:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.117 17:16:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.117 17:16:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:15.117 17:16:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.117 17:16:53 -- accel/accel.sh@42 -- # jq -r . 00:06:15.117 [2024-07-12 17:16:53.879388] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:15.117 [2024-07-12 17:16:53.879450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930474 ] 00:06:15.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.117 [2024-07-12 17:16:53.958262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.117 [2024-07-12 17:16:53.999089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.493 17:16:55 -- accel/accel.sh@18 -- # out=' 00:06:16.493 SPDK Configuration: 00:06:16.493 Core mask: 0x1 00:06:16.493 00:06:16.493 Accel Perf Configuration: 00:06:16.493 Workload Type: copy 00:06:16.493 Transfer size: 4096 bytes 00:06:16.493 Vector count 1 00:06:16.493 Module: software 00:06:16.493 Queue depth: 32 00:06:16.493 Allocate depth: 32 00:06:16.493 # threads/core: 1 00:06:16.493 Run time: 1 seconds 00:06:16.493 Verify: Yes 00:06:16.493 00:06:16.493 Running for 1 seconds... 
00:06:16.493 00:06:16.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.493 ------------------------------------------------------------------------------------ 00:06:16.493 0,0 264704/s 1034 MiB/s 0 0 00:06:16.493 ==================================================================================== 00:06:16.493 Total 264704/s 1034 MiB/s 0 0' 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:16.493 17:16:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:16.493 17:16:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.493 17:16:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.493 17:16:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.493 17:16:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.493 17:16:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.493 17:16:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.493 17:16:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.493 17:16:55 -- accel/accel.sh@42 -- # jq -r . 00:06:16.493 [2024-07-12 17:16:55.200565] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:16.493 [2024-07-12 17:16:55.200643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930746 ] 00:06:16.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.493 [2024-07-12 17:16:55.281768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.493 [2024-07-12 17:16:55.321553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val= 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val= 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val=0x1 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val= 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val= 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val=copy 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- 
accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 17:16:55 -- accel/accel.sh@21 -- # val= 00:06:16.493 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.493 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val=software 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val=32 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val=32 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val=1 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val=Yes 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 
-- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val= 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:16.494 17:16:55 -- accel/accel.sh@21 -- # val= 00:06:16.494 17:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # IFS=: 00:06:16.494 17:16:55 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:17.866 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:17.866 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:17.866 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:17.866 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:17.866 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 17:16:56 -- accel/accel.sh@21 -- # val= 00:06:17.866 17:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 17:16:56 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 17:16:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.866 17:16:56 -- 
accel/accel.sh@28 -- # [[ -n copy ]] 00:06:17.866 17:16:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.866 00:06:17.866 real 0m2.639s 00:06:17.866 user 0m2.373s 00:06:17.866 sys 0m0.271s 00:06:17.866 17:16:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.867 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:17.867 ************************************ 00:06:17.867 END TEST accel_copy 00:06:17.867 ************************************ 00:06:17.867 17:16:56 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.867 17:16:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:17.867 17:16:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.867 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:17.867 ************************************ 00:06:17.867 START TEST accel_fill 00:06:17.867 ************************************ 00:06:17.867 17:16:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.867 17:16:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.867 17:16:56 -- accel/accel.sh@17 -- # local accel_module 00:06:17.867 17:16:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.867 17:16:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.867 17:16:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.867 17:16:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.867 17:16:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.867 17:16:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.867 17:16:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.867 17:16:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.867 17:16:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.867 17:16:56 -- accel/accel.sh@42 -- # jq -r . 
00:06:17.867 [2024-07-12 17:16:56.548113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:17.867 [2024-07-12 17:16:56.548155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931028 ] 00:06:17.867 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.867 [2024-07-12 17:16:56.617203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.867 [2024-07-12 17:16:56.657739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.239 17:16:57 -- accel/accel.sh@18 -- # out=' 00:06:19.239 SPDK Configuration: 00:06:19.239 Core mask: 0x1 00:06:19.239 00:06:19.239 Accel Perf Configuration: 00:06:19.239 Workload Type: fill 00:06:19.239 Fill pattern: 0x80 00:06:19.239 Transfer size: 4096 bytes 00:06:19.239 Vector count 1 00:06:19.239 Module: software 00:06:19.239 Queue depth: 64 00:06:19.239 Allocate depth: 64 00:06:19.239 # threads/core: 1 00:06:19.239 Run time: 1 seconds 00:06:19.239 Verify: Yes 00:06:19.239 00:06:19.239 Running for 1 seconds... 
00:06:19.239 00:06:19.239 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.239 ------------------------------------------------------------------------------------ 00:06:19.239 0,0 408512/s 1595 MiB/s 0 0 00:06:19.239 ==================================================================================== 00:06:19.239 Total 408512/s 1595 MiB/s 0 0' 00:06:19.239 17:16:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.239 17:16:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.239 17:16:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.239 17:16:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.239 17:16:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.239 17:16:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.239 17:16:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.239 17:16:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.239 17:16:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.240 17:16:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.240 17:16:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.240 17:16:57 -- accel/accel.sh@42 -- # jq -r . 00:06:19.240 [2024-07-12 17:16:57.861032] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:19.240 [2024-07-12 17:16:57.861108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931292 ] 00:06:19.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.240 [2024-07-12 17:16:57.941556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.240 [2024-07-12 17:16:57.981266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val= 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val= 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=0x1 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val= 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val= 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=fill 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- 
accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=0x80 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val= 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=software 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=64 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=64 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=1 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 
-- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val=Yes 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val= 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:19.240 17:16:58 -- accel/accel.sh@21 -- # val= 00:06:19.240 17:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # IFS=: 00:06:19.240 17:16:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.615 17:16:59 -- accel/accel.sh@21 -- # val= 00:06:20.615 17:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # IFS=: 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # read -r var val 00:06:20.615 17:16:59 -- accel/accel.sh@21 -- # val= 00:06:20.615 17:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # IFS=: 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # read -r var val 00:06:20.615 17:16:59 -- accel/accel.sh@21 -- # val= 00:06:20.615 17:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # IFS=: 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # read -r var val 00:06:20.615 17:16:59 -- accel/accel.sh@21 -- # val= 00:06:20.615 17:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # IFS=: 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # read -r var val 00:06:20.615 17:16:59 -- accel/accel.sh@21 -- # val= 00:06:20.615 17:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # IFS=: 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # read -r var val 00:06:20.615 17:16:59 -- accel/accel.sh@21 -- # val= 00:06:20.615 17:16:59 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # IFS=: 00:06:20.615 17:16:59 -- accel/accel.sh@20 -- # read -r var val 00:06:20.615 17:16:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.615 17:16:59 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:20.615 17:16:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.615 00:06:20.615 real 0m2.624s 00:06:20.615 user 0m2.388s 00:06:20.615 sys 0m0.240s 00:06:20.615 17:16:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.615 17:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:20.615 ************************************ 00:06:20.615 END TEST accel_fill 00:06:20.615 ************************************ 00:06:20.615 17:16:59 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:20.615 17:16:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:20.615 17:16:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.615 17:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:20.615 ************************************ 00:06:20.615 START TEST accel_copy_crc32c 00:06:20.616 ************************************ 00:06:20.616 17:16:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:20.616 17:16:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.616 17:16:59 -- accel/accel.sh@17 -- # local accel_module 00:06:20.616 17:16:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:20.616 17:16:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:20.616 17:16:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.616 17:16:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.616 17:16:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.616 17:16:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.616 17:16:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.616 17:16:59 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.616 17:16:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.616 17:16:59 -- accel/accel.sh@42 -- # jq -r . 00:06:20.616 [2024-07-12 17:16:59.226565] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:20.616 [2024-07-12 17:16:59.226633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931579 ] 00:06:20.616 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.616 [2024-07-12 17:16:59.306802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.616 [2024-07-12 17:16:59.348776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.996 17:17:00 -- accel/accel.sh@18 -- # out=' 00:06:21.996 SPDK Configuration: 00:06:21.996 Core mask: 0x1 00:06:21.996 00:06:21.996 Accel Perf Configuration: 00:06:21.996 Workload Type: copy_crc32c 00:06:21.996 CRC-32C seed: 0 00:06:21.996 Vector size: 4096 bytes 00:06:21.996 Transfer size: 4096 bytes 00:06:21.996 Vector count 1 00:06:21.996 Module: software 00:06:21.996 Queue depth: 32 00:06:21.996 Allocate depth: 32 00:06:21.996 # threads/core: 1 00:06:21.996 Run time: 1 seconds 00:06:21.996 Verify: Yes 00:06:21.996 00:06:21.996 Running for 1 seconds... 
00:06:21.996 00:06:21.996 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.996 ------------------------------------------------------------------------------------ 00:06:21.996 0,0 201376/s 786 MiB/s 0 0 00:06:21.996 ==================================================================================== 00:06:21.996 Total 201376/s 786 MiB/s 0 0' 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:21.996 17:17:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:21.996 17:17:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.996 17:17:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.996 17:17:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.996 17:17:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.996 17:17:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.996 17:17:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.996 17:17:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.996 17:17:00 -- accel/accel.sh@42 -- # jq -r . 00:06:21.996 [2024-07-12 17:17:00.547926] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:21.996 [2024-07-12 17:17:00.547985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931843 ] 00:06:21.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.996 [2024-07-12 17:17:00.628479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.996 [2024-07-12 17:17:00.668900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val= 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val= 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=0x1 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val= 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val= 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- 
accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=0 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val= 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=software 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=32 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=32 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=1 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 
-- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val=Yes 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val= 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:21.996 17:17:00 -- accel/accel.sh@21 -- # val= 00:06:21.996 17:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # IFS=: 00:06:21.996 17:17:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.930 17:17:01 -- accel/accel.sh@21 -- # val= 00:06:22.930 17:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # IFS=: 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # read -r var val 00:06:22.930 17:17:01 -- accel/accel.sh@21 -- # val= 00:06:22.930 17:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # IFS=: 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # read -r var val 00:06:22.930 17:17:01 -- accel/accel.sh@21 -- # val= 00:06:22.930 17:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # IFS=: 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # read -r var val 00:06:22.930 17:17:01 -- accel/accel.sh@21 -- # val= 00:06:22.930 17:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # IFS=: 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # read -r var val 00:06:22.930 17:17:01 -- accel/accel.sh@21 -- # val= 00:06:22.930 17:17:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # IFS=: 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # read -r var val 00:06:22.930 17:17:01 -- accel/accel.sh@21 -- # val= 00:06:22.930 17:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # IFS=: 00:06:22.930 17:17:01 -- accel/accel.sh@20 -- # read -r var val 00:06:22.930 17:17:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.930 17:17:01 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:22.930 17:17:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.930 00:06:22.930 real 0m2.647s 00:06:22.930 user 0m2.391s 00:06:22.930 sys 0m0.261s 00:06:22.930 17:17:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.930 17:17:01 -- common/autotest_common.sh@10 -- # set +x 00:06:22.930 ************************************ 00:06:22.930 END TEST accel_copy_crc32c 00:06:22.930 ************************************ 00:06:22.930 17:17:01 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.930 17:17:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:22.930 17:17:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.930 17:17:01 -- common/autotest_common.sh@10 -- # set +x 00:06:22.930 ************************************ 00:06:22.930 START TEST accel_copy_crc32c_C2 00:06:22.930 ************************************ 00:06:22.930 17:17:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.930 17:17:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.930 17:17:01 -- accel/accel.sh@17 -- # local accel_module 00:06:22.930 17:17:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:22.930 17:17:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:22.930 17:17:01 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:22.930 17:17:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.930 17:17:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.930 17:17:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.931 17:17:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.931 17:17:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.931 17:17:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.931 17:17:01 -- accel/accel.sh@42 -- # jq -r . 00:06:23.189 [2024-07-12 17:17:01.915051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:23.189 [2024-07-12 17:17:01.915133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932127 ] 00:06:23.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.189 [2024-07-12 17:17:01.998022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.189 [2024-07-12 17:17:02.038159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.562 17:17:03 -- accel/accel.sh@18 -- # out=' 00:06:24.562 SPDK Configuration: 00:06:24.562 Core mask: 0x1 00:06:24.562 00:06:24.562 Accel Perf Configuration: 00:06:24.562 Workload Type: copy_crc32c 00:06:24.562 CRC-32C seed: 0 00:06:24.562 Vector size: 4096 bytes 00:06:24.562 Transfer size: 8192 bytes 00:06:24.562 Vector count 2 00:06:24.562 Module: software 00:06:24.562 Queue depth: 32 00:06:24.562 Allocate depth: 32 00:06:24.562 # threads/core: 1 00:06:24.562 Run time: 1 seconds 00:06:24.562 Verify: Yes 00:06:24.562 00:06:24.562 Running for 1 seconds... 
00:06:24.562 00:06:24.562 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.562 ------------------------------------------------------------------------------------ 00:06:24.562 0,0 146400/s 1143 MiB/s 0 0 00:06:24.562 ==================================================================================== 00:06:24.562 Total 146400/s 571 MiB/s 0 0' 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:24.562 17:17:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:24.562 17:17:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.562 17:17:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.562 17:17:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.562 17:17:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.562 17:17:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.562 17:17:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.562 17:17:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.562 17:17:03 -- accel/accel.sh@42 -- # jq -r . 00:06:24.562 [2024-07-12 17:17:03.238312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:24.562 [2024-07-12 17:17:03.238394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932390 ] 00:06:24.562 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.562 [2024-07-12 17:17:03.319475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.562 [2024-07-12 17:17:03.359651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val= 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val= 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=0x1 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val= 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val= 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- 
accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=0 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val= 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=software 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=32 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=32 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=1 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 
-- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val=Yes 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val= 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:24.562 17:17:03 -- accel/accel.sh@21 -- # val= 00:06:24.562 17:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # IFS=: 00:06:24.562 17:17:03 -- accel/accel.sh@20 -- # read -r var val 00:06:25.938 17:17:04 -- accel/accel.sh@21 -- # val= 00:06:25.938 17:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # IFS=: 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # read -r var val 00:06:25.938 17:17:04 -- accel/accel.sh@21 -- # val= 00:06:25.938 17:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # IFS=: 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # read -r var val 00:06:25.938 17:17:04 -- accel/accel.sh@21 -- # val= 00:06:25.938 17:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # IFS=: 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # read -r var val 00:06:25.938 17:17:04 -- accel/accel.sh@21 -- # val= 00:06:25.938 17:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # IFS=: 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # read -r var val 00:06:25.938 17:17:04 -- accel/accel.sh@21 -- # val= 00:06:25.938 17:17:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # IFS=: 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # read -r var val 00:06:25.938 17:17:04 -- accel/accel.sh@21 -- # val= 00:06:25.938 17:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # IFS=: 00:06:25.938 17:17:04 -- accel/accel.sh@20 -- # read -r var val 00:06:25.938 17:17:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.938 17:17:04 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:25.938 17:17:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.938 00:06:25.938 real 0m2.650s 00:06:25.938 user 0m2.383s 00:06:25.938 sys 0m0.274s 00:06:25.938 17:17:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.938 17:17:04 -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 ************************************ 00:06:25.938 END TEST accel_copy_crc32c_C2 00:06:25.938 ************************************ 00:06:25.938 17:17:04 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:25.938 17:17:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:25.938 17:17:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.938 17:17:04 -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 ************************************ 00:06:25.938 START TEST accel_dualcast 00:06:25.938 ************************************ 00:06:25.938 17:17:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:25.938 17:17:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.938 17:17:04 -- accel/accel.sh@17 -- # local accel_module 00:06:25.938 17:17:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:25.938 17:17:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.938 17:17:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:25.938 17:17:04 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.938 17:17:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.938 17:17:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.938 17:17:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.938 17:17:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.938 17:17:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.938 17:17:04 -- accel/accel.sh@42 -- # jq -r . 00:06:25.938 [2024-07-12 17:17:04.599953] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:25.938 [2024-07-12 17:17:04.600017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932677 ] 00:06:25.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.938 [2024-07-12 17:17:04.679633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.938 [2024-07-12 17:17:04.720085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.315 17:17:05 -- accel/accel.sh@18 -- # out=' 00:06:27.315 SPDK Configuration: 00:06:27.315 Core mask: 0x1 00:06:27.315 00:06:27.315 Accel Perf Configuration: 00:06:27.315 Workload Type: dualcast 00:06:27.315 Transfer size: 4096 bytes 00:06:27.315 Vector count 1 00:06:27.315 Module: software 00:06:27.315 Queue depth: 32 00:06:27.315 Allocate depth: 32 00:06:27.315 # threads/core: 1 00:06:27.315 Run time: 1 seconds 00:06:27.315 Verify: Yes 00:06:27.315 00:06:27.315 Running for 1 seconds... 
00:06:27.315 00:06:27.315 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.315 ------------------------------------------------------------------------------------ 00:06:27.315 0,0 314144/s 1227 MiB/s 0 0 00:06:27.315 ==================================================================================== 00:06:27.315 Total 314144/s 1227 MiB/s 0 0' 00:06:27.315 17:17:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.315 17:17:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.315 17:17:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:27.315 17:17:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:27.315 17:17:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.315 17:17:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.315 17:17:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.315 17:17:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.315 17:17:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.315 17:17:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.315 17:17:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.315 17:17:05 -- accel/accel.sh@42 -- # jq -r . 00:06:27.315 [2024-07-12 17:17:05.921063] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:27.315 [2024-07-12 17:17:05.921139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932941 ] 00:06:27.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.315 [2024-07-12 17:17:06.001969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.315 [2024-07-12 17:17:06.041726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.315 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:27.315 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.315 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:27.315 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.315 17:17:06 -- accel/accel.sh@21 -- # val=0x1 00:06:27.315 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.315 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:27.315 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.315 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:27.315 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.315 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.315 17:17:06 -- accel/accel.sh@21 -- # val=dualcast 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- 
accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val=software 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val=32 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val=32 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val=1 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val=Yes 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 
-- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:27.316 17:17:06 -- accel/accel.sh@21 -- # val= 00:06:27.316 17:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # IFS=: 00:06:27.316 17:17:06 -- accel/accel.sh@20 -- # read -r var val 00:06:28.251 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:28.251 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:28.251 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:28.251 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:28.251 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:28.251 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:28.251 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:28.251 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:28.251 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:28.510 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:28.510 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.510 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:28.510 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:28.510 17:17:07 -- accel/accel.sh@21 -- # val= 00:06:28.510 17:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.510 17:17:07 -- accel/accel.sh@20 -- # IFS=: 00:06:28.510 17:17:07 -- accel/accel.sh@20 -- # read -r var val 00:06:28.510 17:17:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.510 17:17:07 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:28.510 17:17:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.510 00:06:28.510 real 0m2.646s 00:06:28.510 user 0m2.386s 00:06:28.510 sys 0m0.265s 00:06:28.510 17:17:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.510 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:28.510 ************************************ 00:06:28.510 END TEST accel_dualcast 00:06:28.510 ************************************ 00:06:28.510 17:17:07 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:28.510 17:17:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:28.510 17:17:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.510 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:28.510 ************************************ 00:06:28.510 START TEST accel_compare 00:06:28.510 ************************************ 00:06:28.510 17:17:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:28.510 17:17:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.510 17:17:07 -- accel/accel.sh@17 -- # local accel_module 00:06:28.510 17:17:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:28.510 17:17:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:28.510 17:17:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.510 17:17:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.510 17:17:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.510 17:17:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.510 17:17:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.510 17:17:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.510 17:17:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.510 17:17:07 -- accel/accel.sh@42 -- # jq -r . 
00:06:28.510 [2024-07-12 17:17:07.282293] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:28.510 [2024-07-12 17:17:07.282350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933223 ] 00:06:28.510 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.510 [2024-07-12 17:17:07.362380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.510 [2024-07-12 17:17:07.403011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.929 17:17:08 -- accel/accel.sh@18 -- # out=' 00:06:29.929 SPDK Configuration: 00:06:29.929 Core mask: 0x1 00:06:29.929 00:06:29.929 Accel Perf Configuration: 00:06:29.929 Workload Type: compare 00:06:29.929 Transfer size: 4096 bytes 00:06:29.929 Vector count 1 00:06:29.929 Module: software 00:06:29.929 Queue depth: 32 00:06:29.929 Allocate depth: 32 00:06:29.929 # threads/core: 1 00:06:29.929 Run time: 1 seconds 00:06:29.929 Verify: Yes 00:06:29.929 00:06:29.929 Running for 1 seconds... 
00:06:29.929 00:06:29.929 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.929 ------------------------------------------------------------------------------------ 00:06:29.929 0,0 379264/s 1481 MiB/s 0 0 00:06:29.929 ==================================================================================== 00:06:29.929 Total 379264/s 1481 MiB/s 0 0' 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:29.929 17:17:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:29.929 17:17:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.929 17:17:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.929 17:17:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.929 17:17:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.929 17:17:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.929 17:17:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.929 17:17:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.929 17:17:08 -- accel/accel.sh@42 -- # jq -r . 00:06:29.929 [2024-07-12 17:17:08.600512] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:29.929 [2024-07-12 17:17:08.600573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933424 ] 00:06:29.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.929 [2024-07-12 17:17:08.680591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.929 [2024-07-12 17:17:08.720050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val= 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val= 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val=0x1 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val= 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val= 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val=compare 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val= 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val=software 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.929 17:17:08 -- accel/accel.sh@21 -- # val=32 00:06:29.929 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.929 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.930 17:17:08 -- accel/accel.sh@21 -- # val=32 00:06:29.930 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.930 17:17:08 -- accel/accel.sh@21 -- # val=1 00:06:29.930 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.930 17:17:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.930 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.930 17:17:08 -- accel/accel.sh@21 -- # val=Yes 00:06:29.930 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.930 17:17:08 -- accel/accel.sh@20 
-- # read -r var val 00:06:29.930 17:17:08 -- accel/accel.sh@21 -- # val= 00:06:29.930 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:29.930 17:17:08 -- accel/accel.sh@21 -- # val= 00:06:29.930 17:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # IFS=: 00:06:29.930 17:17:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.325 17:17:09 -- accel/accel.sh@21 -- # val= 00:06:31.325 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.325 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.325 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.325 17:17:09 -- accel/accel.sh@21 -- # val= 00:06:31.325 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.325 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.325 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.325 17:17:09 -- accel/accel.sh@21 -- # val= 00:06:31.326 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.326 17:17:09 -- accel/accel.sh@21 -- # val= 00:06:31.326 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.326 17:17:09 -- accel/accel.sh@21 -- # val= 00:06:31.326 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.326 17:17:09 -- accel/accel.sh@21 -- # val= 00:06:31.326 17:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.326 17:17:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.326 17:17:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.326 17:17:09 -- 
accel/accel.sh@28 -- # [[ -n compare ]] 00:06:31.326 17:17:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.326 00:06:31.326 real 0m2.643s 00:06:31.326 user 0m2.386s 00:06:31.326 sys 0m0.263s 00:06:31.326 17:17:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.326 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:06:31.326 ************************************ 00:06:31.326 END TEST accel_compare 00:06:31.326 ************************************ 00:06:31.326 17:17:09 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:31.326 17:17:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:31.326 17:17:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.326 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:06:31.326 ************************************ 00:06:31.326 START TEST accel_xor 00:06:31.326 ************************************ 00:06:31.326 17:17:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:31.326 17:17:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.326 17:17:09 -- accel/accel.sh@17 -- # local accel_module 00:06:31.326 17:17:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:31.326 17:17:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:31.326 17:17:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.326 17:17:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.326 17:17:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.326 17:17:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.326 17:17:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.326 17:17:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.326 17:17:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.326 17:17:09 -- accel/accel.sh@42 -- # jq -r . 
00:06:31.326 [2024-07-12 17:17:09.964139] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:31.326 [2024-07-12 17:17:09.964214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933664 ] 00:06:31.326 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.326 [2024-07-12 17:17:10.047237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.326 [2024-07-12 17:17:10.090168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.702 17:17:11 -- accel/accel.sh@18 -- # out=' 00:06:32.702 SPDK Configuration: 00:06:32.702 Core mask: 0x1 00:06:32.702 00:06:32.702 Accel Perf Configuration: 00:06:32.702 Workload Type: xor 00:06:32.702 Source buffers: 2 00:06:32.702 Transfer size: 4096 bytes 00:06:32.702 Vector count 1 00:06:32.702 Module: software 00:06:32.702 Queue depth: 32 00:06:32.702 Allocate depth: 32 00:06:32.702 # threads/core: 1 00:06:32.702 Run time: 1 seconds 00:06:32.702 Verify: Yes 00:06:32.702 00:06:32.702 Running for 1 seconds... 
00:06:32.702 00:06:32.702 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.702 ------------------------------------------------------------------------------------ 00:06:32.702 0,0 314272/s 1227 MiB/s 0 0 00:06:32.702 ==================================================================================== 00:06:32.702 Total 314272/s 1227 MiB/s 0 0' 00:06:32.702 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.702 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.702 17:17:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:32.702 17:17:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:32.702 17:17:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.702 17:17:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.702 17:17:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.702 17:17:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.702 17:17:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.702 17:17:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.702 17:17:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.702 17:17:11 -- accel/accel.sh@42 -- # jq -r . 00:06:32.702 [2024-07-12 17:17:11.291214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:32.702 [2024-07-12 17:17:11.291298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933863 ] 00:06:32.702 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.702 [2024-07-12 17:17:11.372276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.702 [2024-07-12 17:17:11.412332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.702 17:17:11 -- accel/accel.sh@21 -- # val= 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val= 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=0x1 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val= 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val= 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=xor 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- 
accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=2 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val= 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=software 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=32 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=32 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=1 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- 
# read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val=Yes 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val= 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:32.703 17:17:11 -- accel/accel.sh@21 -- # val= 00:06:32.703 17:17:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # IFS=: 00:06:32.703 17:17:11 -- accel/accel.sh@20 -- # read -r var val 00:06:33.640 17:17:12 -- accel/accel.sh@21 -- # val= 00:06:33.640 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:06:33.640 17:17:12 -- accel/accel.sh@21 -- # val= 00:06:33.640 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:06:33.640 17:17:12 -- accel/accel.sh@21 -- # val= 00:06:33.640 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:06:33.640 17:17:12 -- accel/accel.sh@21 -- # val= 00:06:33.640 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:06:33.640 17:17:12 -- accel/accel.sh@21 -- # val= 00:06:33.640 17:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:06:33.640 17:17:12 -- accel/accel.sh@21 -- # val= 00:06:33.640 17:17:12 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # IFS=: 00:06:33.640 17:17:12 -- accel/accel.sh@20 -- # read -r var val 00:06:33.640 17:17:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.640 17:17:12 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:33.640 17:17:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.640 00:06:33.640 real 0m2.654s 00:06:33.640 user 0m2.378s 00:06:33.640 sys 0m0.283s 00:06:33.640 17:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.640 17:17:12 -- common/autotest_common.sh@10 -- # set +x 00:06:33.640 ************************************ 00:06:33.640 END TEST accel_xor 00:06:33.640 ************************************ 00:06:33.899 17:17:12 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:33.899 17:17:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:33.899 17:17:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.899 17:17:12 -- common/autotest_common.sh@10 -- # set +x 00:06:33.899 ************************************ 00:06:33.899 START TEST accel_xor 00:06:33.899 ************************************ 00:06:33.899 17:17:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:33.899 17:17:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.899 17:17:12 -- accel/accel.sh@17 -- # local accel_module 00:06:33.899 17:17:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:33.899 17:17:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.899 17:17:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:33.899 17:17:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.899 17:17:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.899 17:17:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.899 17:17:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.899 17:17:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:33.899 17:17:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.899 17:17:12 -- accel/accel.sh@42 -- # jq -r . 00:06:33.899 [2024-07-12 17:17:12.655419] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:33.899 [2024-07-12 17:17:12.655485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934110 ] 00:06:33.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.899 [2024-07-12 17:17:12.734706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.899 [2024-07-12 17:17:12.774873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.276 17:17:13 -- accel/accel.sh@18 -- # out=' 00:06:35.276 SPDK Configuration: 00:06:35.276 Core mask: 0x1 00:06:35.276 00:06:35.276 Accel Perf Configuration: 00:06:35.276 Workload Type: xor 00:06:35.276 Source buffers: 3 00:06:35.276 Transfer size: 4096 bytes 00:06:35.276 Vector count 1 00:06:35.276 Module: software 00:06:35.276 Queue depth: 32 00:06:35.276 Allocate depth: 32 00:06:35.276 # threads/core: 1 00:06:35.276 Run time: 1 seconds 00:06:35.276 Verify: Yes 00:06:35.276 00:06:35.276 Running for 1 seconds... 
00:06:35.276 00:06:35.276 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.276 ------------------------------------------------------------------------------------ 00:06:35.276 0,0 295584/s 1154 MiB/s 0 0 00:06:35.276 ==================================================================================== 00:06:35.276 Total 295584/s 1154 MiB/s 0 0' 00:06:35.276 17:17:13 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:13 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:35.276 17:17:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:35.276 17:17:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.276 17:17:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.276 17:17:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.276 17:17:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.276 17:17:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.276 17:17:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.276 17:17:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.276 17:17:13 -- accel/accel.sh@42 -- # jq -r . 00:06:35.276 [2024-07-12 17:17:13.976505] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:35.276 [2024-07-12 17:17:13.976580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934352 ] 00:06:35.276 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.276 [2024-07-12 17:17:14.058131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.276 [2024-07-12 17:17:14.097623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val= 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val= 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=0x1 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val= 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val= 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=xor 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- 
accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=3 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val= 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=software 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=32 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=32 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=1 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- 
# read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val=Yes 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val= 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:35.276 17:17:14 -- accel/accel.sh@21 -- # val= 00:06:35.276 17:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # IFS=: 00:06:35.276 17:17:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.655 17:17:15 -- accel/accel.sh@21 -- # val= 00:06:36.655 17:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # IFS=: 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # read -r var val 00:06:36.655 17:17:15 -- accel/accel.sh@21 -- # val= 00:06:36.655 17:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # IFS=: 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # read -r var val 00:06:36.655 17:17:15 -- accel/accel.sh@21 -- # val= 00:06:36.655 17:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # IFS=: 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # read -r var val 00:06:36.655 17:17:15 -- accel/accel.sh@21 -- # val= 00:06:36.655 17:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # IFS=: 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # read -r var val 00:06:36.655 17:17:15 -- accel/accel.sh@21 -- # val= 00:06:36.655 17:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # IFS=: 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # read -r var val 00:06:36.655 17:17:15 -- accel/accel.sh@21 -- # val= 00:06:36.655 17:17:15 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # IFS=: 00:06:36.655 17:17:15 -- accel/accel.sh@20 -- # read -r var val 00:06:36.655 17:17:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.655 17:17:15 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:36.655 17:17:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.655 00:06:36.655 real 0m2.648s 00:06:36.655 user 0m2.382s 00:06:36.655 sys 0m0.271s 00:06:36.655 17:17:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.655 17:17:15 -- common/autotest_common.sh@10 -- # set +x 00:06:36.655 ************************************ 00:06:36.655 END TEST accel_xor 00:06:36.655 ************************************ 00:06:36.655 17:17:15 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:36.655 17:17:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:36.655 17:17:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.655 17:17:15 -- common/autotest_common.sh@10 -- # set +x 00:06:36.655 ************************************ 00:06:36.655 START TEST accel_dif_verify 00:06:36.655 ************************************ 00:06:36.655 17:17:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:36.655 17:17:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.655 17:17:15 -- accel/accel.sh@17 -- # local accel_module 00:06:36.655 17:17:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:36.655 17:17:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.655 17:17:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.655 17:17:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.655 17:17:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:36.655 17:17:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.655 17:17:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.655 17:17:15 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:06:36.655 17:17:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.655 17:17:15 -- accel/accel.sh@42 -- # jq -r . 00:06:36.655 [2024-07-12 17:17:15.337110] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:36.655 [2024-07-12 17:17:15.337175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934627 ] 00:06:36.655 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.655 [2024-07-12 17:17:15.415655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.655 [2024-07-12 17:17:15.455727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.034 17:17:16 -- accel/accel.sh@18 -- # out=' 00:06:38.034 SPDK Configuration: 00:06:38.034 Core mask: 0x1 00:06:38.034 00:06:38.034 Accel Perf Configuration: 00:06:38.034 Workload Type: dif_verify 00:06:38.034 Vector size: 4096 bytes 00:06:38.034 Transfer size: 4096 bytes 00:06:38.034 Block size: 512 bytes 00:06:38.034 Metadata size: 8 bytes 00:06:38.034 Vector count 1 00:06:38.034 Module: software 00:06:38.034 Queue depth: 32 00:06:38.034 Allocate depth: 32 00:06:38.034 # threads/core: 1 00:06:38.034 Run time: 1 seconds 00:06:38.034 Verify: No 00:06:38.034 00:06:38.034 Running for 1 seconds... 
00:06:38.034 00:06:38.034 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.034 ------------------------------------------------------------------------------------ 00:06:38.034 0,0 80320/s 313 MiB/s 0 0 00:06:38.034 ==================================================================================== 00:06:38.034 Total 80320/s 313 MiB/s 0 0' 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:38.034 17:17:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:38.034 17:17:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.034 17:17:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.034 17:17:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.034 17:17:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.034 17:17:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.034 17:17:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.034 17:17:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.034 17:17:16 -- accel/accel.sh@42 -- # jq -r . 00:06:38.034 [2024-07-12 17:17:16.656301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:38.034 [2024-07-12 17:17:16.656366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934900 ] 00:06:38.034 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.034 [2024-07-12 17:17:16.734565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.034 [2024-07-12 17:17:16.774478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val= 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val= 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val=0x1 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val= 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val= 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val=dif_verify 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- 
accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val= 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val=software 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val=32 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val=32 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- 
accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val=1 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val=No 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val= 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.034 17:17:16 -- accel/accel.sh@21 -- # val= 00:06:38.034 17:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.034 17:17:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.412 17:17:17 -- accel/accel.sh@21 -- # val= 00:06:39.412 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.412 17:17:17 -- accel/accel.sh@21 -- # val= 00:06:39.412 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.412 17:17:17 -- accel/accel.sh@21 -- # val= 00:06:39.412 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.412 17:17:17 -- accel/accel.sh@21 -- # val= 00:06:39.412 17:17:17 
-- accel/accel.sh@22 -- # case "$var" in 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.412 17:17:17 -- accel/accel.sh@21 -- # val= 00:06:39.412 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.412 17:17:17 -- accel/accel.sh@21 -- # val= 00:06:39.412 17:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.412 17:17:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.412 17:17:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.412 17:17:17 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:39.412 17:17:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.412 00:06:39.412 real 0m2.644s 00:06:39.412 user 0m2.397s 00:06:39.412 sys 0m0.255s 00:06:39.412 17:17:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.412 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:39.412 ************************************ 00:06:39.412 END TEST accel_dif_verify 00:06:39.412 ************************************ 00:06:39.412 17:17:17 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:39.412 17:17:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:39.412 17:17:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.412 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:39.412 ************************************ 00:06:39.412 START TEST accel_dif_generate 00:06:39.412 ************************************ 00:06:39.412 17:17:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:39.412 17:17:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.412 17:17:17 -- accel/accel.sh@17 -- # local accel_module 00:06:39.412 17:17:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
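The Bandwidth column in the accel_perf summary tables above follows directly from the Transfers column: each transfer in these runs moves 4096 bytes, so MiB/s is transfers-per-second × 4096 ÷ 2^20, truncated to an integer. A minimal sketch of that arithmetic (the `to_mibs` helper name is ours, not part of accel_perf or the test scripts):

```shell
# Convert an accel_perf transfer rate (4096-byte transfers per second)
# into the integer MiB/s figure printed in the summary table.
to_mibs() {
  echo $(( $1 * 4096 / 1048576 ))
}

to_mibs 314272   # xor, 2 source buffers -> 1227
to_mibs 295584   # xor, 3 source buffers -> 1154
to_mibs 97920    # dif_generate          -> 382
```

This matches the Total rows logged above (e.g. `Total 295584/s 1154 MiB/s`), which is a quick sanity check when comparing runs.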
00:06:39.412 17:17:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:39.412 17:17:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.412 17:17:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.412 17:17:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.412 17:17:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.412 17:17:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.412 17:17:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.412 17:17:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.412 17:17:17 -- accel/accel.sh@42 -- # jq -r . 00:06:39.412 [2024-07-12 17:17:18.018610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:39.412 [2024-07-12 17:17:18.018669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935184 ] 00:06:39.412 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.412 [2024-07-12 17:17:18.092309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.412 [2024-07-12 17:17:18.133149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.349 17:17:19 -- accel/accel.sh@18 -- # out=' 00:06:40.349 SPDK Configuration: 00:06:40.349 Core mask: 0x1 00:06:40.349 00:06:40.349 Accel Perf Configuration: 00:06:40.349 Workload Type: dif_generate 00:06:40.349 Vector size: 4096 bytes 00:06:40.349 Transfer size: 4096 bytes 00:06:40.349 Block size: 512 bytes 00:06:40.349 Metadata size: 8 bytes 00:06:40.349 Vector count 1 00:06:40.349 Module: software 00:06:40.349 Queue depth: 32 00:06:40.349 Allocate depth: 32 00:06:40.349 # threads/core: 1 00:06:40.349 Run time: 1 seconds 00:06:40.349 Verify: No 00:06:40.349 00:06:40.349 Running for 1 seconds... 
00:06:40.349 00:06:40.349 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.349 ------------------------------------------------------------------------------------ 00:06:40.349 0,0 97920/s 382 MiB/s 0 0 00:06:40.349 ==================================================================================== 00:06:40.349 Total 97920/s 382 MiB/s 0 0' 00:06:40.349 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.349 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.349 17:17:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:40.349 17:17:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:40.349 17:17:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.349 17:17:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.349 17:17:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.349 17:17:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.349 17:17:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.349 17:17:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.349 17:17:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.349 17:17:19 -- accel/accel.sh@42 -- # jq -r . 00:06:40.608 [2024-07-12 17:17:19.334017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
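The dif_generate result table above can be sanity-checked by hand: with the 4096-byte transfer size reported in the SPDK configuration, transfers per second convert directly to bandwidth. A minimal sketch, using the 97920/s figure from the Total row (the MiB conversion below is my own arithmetic, not part of the log):

```shell
# Sanity-check: 97920 transfers/s of 4096-byte vectors should match the
# dif_generate Total bandwidth (integer-truncated MiB/s).
transfers=97920
bytes=4096
mib=$(( transfers * bytes / 1024 / 1024 ))
echo "${mib} MiB/s"   # prints "382 MiB/s", matching the Total row
```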
00:06:40.608 [2024-07-12 17:17:19.334094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935448 ] 00:06:40.608 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.608 [2024-07-12 17:17:19.414960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.608 [2024-07-12 17:17:19.454778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val= 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val= 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val=0x1 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val= 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val= 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val=dif_generate 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 
-- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val= 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val=software 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val=32 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val=32 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 
-- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val=1 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val=No 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val= 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:40.608 17:17:19 -- accel/accel.sh@21 -- # val= 00:06:40.608 17:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # IFS=: 00:06:40.608 17:17:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.987 17:17:20 -- accel/accel.sh@21 -- # val= 00:06:41.987 17:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # IFS=: 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # read -r var val 00:06:41.987 17:17:20 -- accel/accel.sh@21 -- # val= 00:06:41.987 17:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # IFS=: 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # read -r var val 00:06:41.987 17:17:20 -- accel/accel.sh@21 -- # val= 00:06:41.987 17:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # IFS=: 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # read -r var val 00:06:41.987 17:17:20 -- accel/accel.sh@21 -- # val= 00:06:41.987 
17:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # IFS=: 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # read -r var val 00:06:41.987 17:17:20 -- accel/accel.sh@21 -- # val= 00:06:41.987 17:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # IFS=: 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # read -r var val 00:06:41.987 17:17:20 -- accel/accel.sh@21 -- # val= 00:06:41.987 17:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # IFS=: 00:06:41.987 17:17:20 -- accel/accel.sh@20 -- # read -r var val 00:06:41.987 17:17:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.987 17:17:20 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:41.987 17:17:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.987 00:06:41.987 real 0m2.643s 00:06:41.987 user 0m2.378s 00:06:41.987 sys 0m0.273s 00:06:41.987 17:17:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.987 17:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:41.987 ************************************ 00:06:41.987 END TEST accel_dif_generate 00:06:41.987 ************************************ 00:06:41.987 17:17:20 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:41.987 17:17:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:41.987 17:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.987 17:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:41.987 ************************************ 00:06:41.987 START TEST accel_dif_generate_copy 00:06:41.987 ************************************ 00:06:41.987 17:17:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:41.987 17:17:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.987 17:17:20 -- accel/accel.sh@17 -- # local accel_module 00:06:41.987 17:17:20 -- accel/accel.sh@18 -- # 
accel_perf -t 1 -w dif_generate_copy 00:06:41.987 17:17:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:41.987 17:17:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.987 17:17:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.987 17:17:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.987 17:17:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.987 17:17:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.987 17:17:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.987 17:17:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.987 17:17:20 -- accel/accel.sh@42 -- # jq -r . 00:06:41.987 [2024-07-12 17:17:20.687351] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.987 [2024-07-12 17:17:20.687409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935735 ] 00:06:41.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.987 [2024-07-12 17:17:20.764672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.987 [2024-07-12 17:17:20.805289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.364 17:17:21 -- accel/accel.sh@18 -- # out=' 00:06:43.364 SPDK Configuration: 00:06:43.365 Core mask: 0x1 00:06:43.365 00:06:43.365 Accel Perf Configuration: 00:06:43.365 Workload Type: dif_generate_copy 00:06:43.365 Vector size: 4096 bytes 00:06:43.365 Transfer size: 4096 bytes 00:06:43.365 Vector count 1 00:06:43.365 Module: software 00:06:43.365 Queue depth: 32 00:06:43.365 Allocate depth: 32 00:06:43.365 # threads/core: 1 00:06:43.365 Run time: 1 seconds 00:06:43.365 Verify: No 00:06:43.365 00:06:43.365 Running for 1 seconds... 
00:06:43.365 00:06:43.365 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.365 ------------------------------------------------------------------------------------ 00:06:43.365 0,0 75264/s 294 MiB/s 0 0 00:06:43.365 ==================================================================================== 00:06:43.365 Total 75264/s 294 MiB/s 0 0' 00:06:43.365 17:17:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:43.365 17:17:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:43.365 17:17:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.365 17:17:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.365 17:17:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.365 17:17:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.365 17:17:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.365 17:17:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.365 17:17:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.365 17:17:21 -- accel/accel.sh@42 -- # jq -r . 00:06:43.365 [2024-07-12 17:17:22.007984] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
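The same cross-check applies to the dif_generate_copy table above. A short sketch with the 75264/s Total figure (the intermediate bytes/s value is my own derivation, not logged by accel_perf):

```shell
# dif_generate_copy: 75264 transfers/s at 4096 bytes per vector.
transfers=75264
bps=$(( transfers * 4096 ))   # bytes per second
mib=$(( bps / 1048576 ))      # 1 MiB = 1048576 bytes
echo "${mib} MiB/s"           # prints "294 MiB/s", as in the Total row
```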
00:06:43.365 [2024-07-12 17:17:22.008062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935999 ] 00:06:43.365 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.365 [2024-07-12 17:17:22.088091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.365 [2024-07-12 17:17:22.127863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val= 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val= 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val=0x1 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val= 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val= 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 
17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val= 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val=software 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val=32 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val=32 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val=1 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 
-- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val=No 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val= 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:43.365 17:17:22 -- accel/accel.sh@21 -- # val= 00:06:43.365 17:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # IFS=: 00:06:43.365 17:17:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.741 17:17:23 -- accel/accel.sh@21 -- # val= 00:06:44.741 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:06:44.741 17:17:23 -- accel/accel.sh@21 -- # val= 00:06:44.741 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:06:44.741 17:17:23 -- accel/accel.sh@21 -- # val= 00:06:44.741 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:06:44.741 17:17:23 -- accel/accel.sh@21 -- # val= 00:06:44.741 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:06:44.741 17:17:23 -- accel/accel.sh@21 -- # val= 00:06:44.741 17:17:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:06:44.741 17:17:23 -- accel/accel.sh@21 -- # val= 00:06:44.741 17:17:23 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # IFS=: 00:06:44.741 17:17:23 -- accel/accel.sh@20 -- # read -r var val 00:06:44.741 17:17:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.741 17:17:23 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:44.741 17:17:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.741 00:06:44.741 real 0m2.638s 00:06:44.741 user 0m2.384s 00:06:44.741 sys 0m0.258s 00:06:44.741 17:17:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.741 17:17:23 -- common/autotest_common.sh@10 -- # set +x 00:06:44.741 ************************************ 00:06:44.741 END TEST accel_dif_generate_copy 00:06:44.741 ************************************ 00:06:44.741 17:17:23 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:44.741 17:17:23 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.741 17:17:23 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:44.741 17:17:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.741 17:17:23 -- common/autotest_common.sh@10 -- # set +x 00:06:44.741 ************************************ 00:06:44.741 START TEST accel_comp 00:06:44.741 ************************************ 00:06:44.741 17:17:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.741 17:17:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.742 17:17:23 -- accel/accel.sh@17 -- # local accel_module 00:06:44.742 17:17:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.742 17:17:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:06:44.742 17:17:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.742 17:17:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.742 17:17:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.742 17:17:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.742 17:17:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.742 17:17:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.742 17:17:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.742 17:17:23 -- accel/accel.sh@42 -- # jq -r . 00:06:44.742 [2024-07-12 17:17:23.372492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:44.742 [2024-07-12 17:17:23.372600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936283 ] 00:06:44.742 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.742 [2024-07-12 17:17:23.488170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.742 [2024-07-12 17:17:23.530938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.115 17:17:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:46.115 00:06:46.115 SPDK Configuration: 00:06:46.115 Core mask: 0x1 00:06:46.115 00:06:46.115 Accel Perf Configuration: 00:06:46.115 Workload Type: compress 00:06:46.115 Transfer size: 4096 bytes 00:06:46.115 Vector count 1 00:06:46.115 Module: software 00:06:46.115 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.115 Queue depth: 32 00:06:46.115 Allocate depth: 32 00:06:46.115 # threads/core: 1 00:06:46.115 Run time: 1 seconds 00:06:46.115 Verify: No 00:06:46.115 00:06:46.115 Running for 1 seconds... 
00:06:46.115 00:06:46.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.115 ------------------------------------------------------------------------------------ 00:06:46.115 0,0 40192/s 157 MiB/s 0 0 00:06:46.115 ==================================================================================== 00:06:46.115 Total 40192/s 157 MiB/s 0 0' 00:06:46.115 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.115 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.115 17:17:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.115 17:17:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.115 17:17:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.115 17:17:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.115 17:17:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.115 17:17:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.115 17:17:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.115 17:17:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.115 17:17:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.115 17:17:24 -- accel/accel.sh@42 -- # jq -r . 00:06:46.115 [2024-07-12 17:17:24.734730] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
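For the compress table above, throughput is noticeably lower than the dif workloads, which is consistent with software compression being heavier per 4096-byte vector. The same bandwidth check, with 40192/s from the Total row (conversion is my own arithmetic):

```shell
# compress workload: 40192 transfers/s of 4096-byte vectors.
mib=$(( 40192 * 4096 / 1048576 ))
echo "${mib} MiB/s"   # prints "157 MiB/s", matching the Total row
```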
00:06:46.115 [2024-07-12 17:17:24.734806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936553 ] 00:06:46.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.115 [2024-07-12 17:17:24.815714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.115 [2024-07-12 17:17:24.855536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.115 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.115 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.115 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.115 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.115 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.115 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.115 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.115 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.115 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.115 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val=0x1 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 
-- # val=compress 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val=software 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val=32 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val=32 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val=1 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 
00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val=No 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 17:17:24 -- accel/accel.sh@21 -- # val= 00:06:46.116 17:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.116 17:17:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.490 17:17:26 -- accel/accel.sh@21 -- # val= 00:06:47.490 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:06:47.490 17:17:26 -- accel/accel.sh@21 -- # val= 00:06:47.490 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:06:47.490 17:17:26 -- accel/accel.sh@21 -- # val= 00:06:47.490 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:06:47.490 17:17:26 -- accel/accel.sh@21 -- # val= 00:06:47.490 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:06:47.490 17:17:26 -- accel/accel.sh@21 -- # 
val= 00:06:47.490 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:06:47.490 17:17:26 -- accel/accel.sh@21 -- # val= 00:06:47.490 17:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # IFS=: 00:06:47.490 17:17:26 -- accel/accel.sh@20 -- # read -r var val 00:06:47.490 17:17:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.490 17:17:26 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:47.490 17:17:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.490 00:06:47.490 real 0m2.698s 00:06:47.490 user 0m2.408s 00:06:47.490 sys 0m0.294s 00:06:47.490 17:17:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.490 17:17:26 -- common/autotest_common.sh@10 -- # set +x 00:06:47.490 ************************************ 00:06:47.490 END TEST accel_comp 00:06:47.490 ************************************ 00:06:47.491 17:17:26 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.491 17:17:26 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:47.491 17:17:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.491 17:17:26 -- common/autotest_common.sh@10 -- # set +x 00:06:47.491 ************************************ 00:06:47.491 START TEST accel_decomp 00:06:47.491 ************************************ 00:06:47.491 17:17:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.491 17:17:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.491 17:17:26 -- accel/accel.sh@17 -- # local accel_module 00:06:47.491 17:17:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.491 17:17:26 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.491 17:17:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.491 17:17:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.491 17:17:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.491 17:17:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.491 17:17:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.491 17:17:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.491 17:17:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.491 17:17:26 -- accel/accel.sh@42 -- # jq -r . 00:06:47.491 [2024-07-12 17:17:26.102399] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:47.491 [2024-07-12 17:17:26.102473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936834 ] 00:06:47.491 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.491 [2024-07-12 17:17:26.183188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.491 [2024-07-12 17:17:26.224040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.865 17:17:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:48.865 00:06:48.865 SPDK Configuration: 00:06:48.865 Core mask: 0x1 00:06:48.865 00:06:48.865 Accel Perf Configuration: 00:06:48.865 Workload Type: decompress 00:06:48.865 Transfer size: 4096 bytes 00:06:48.865 Vector count 1 00:06:48.865 Module: software 00:06:48.865 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.865 Queue depth: 32 00:06:48.865 Allocate depth: 32 00:06:48.865 # threads/core: 1 00:06:48.865 Run time: 1 seconds 00:06:48.865 Verify: Yes 00:06:48.865 00:06:48.865 Running for 1 seconds... 
00:06:48.865 00:06:48.865 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.865 ------------------------------------------------------------------------------------ 00:06:48.865 0,0 46688/s 86 MiB/s 0 0 00:06:48.865 ==================================================================================== 00:06:48.865 Total 46688/s 182 MiB/s 0 0' 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.865 17:17:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.865 17:17:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.865 17:17:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.865 17:17:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.865 17:17:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.865 17:17:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.865 17:17:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.865 17:17:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.865 17:17:27 -- accel/accel.sh@42 -- # jq -r . 00:06:48.865 [2024-07-12 17:17:27.429447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:48.865 [2024-07-12 17:17:27.429522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937099 ] 00:06:48.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.865 [2024-07-12 17:17:27.510946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.865 [2024-07-12 17:17:27.550781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val=0x1 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 
-- # val=decompress 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val=software 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val=32 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val=32 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.865 17:17:27 -- accel/accel.sh@21 -- # val=1 00:06:48.865 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # 
IFS=: 00:06:48.865 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.866 17:17:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.866 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.866 17:17:27 -- accel/accel.sh@21 -- # val=Yes 00:06:48.866 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.866 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.866 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:48.866 17:17:27 -- accel/accel.sh@21 -- # val= 00:06:48.866 17:17:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # IFS=: 00:06:48.866 17:17:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.801 17:17:28 -- accel/accel.sh@21 -- # val= 00:06:49.801 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:06:49.801 17:17:28 -- accel/accel.sh@21 -- # val= 00:06:49.801 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:06:49.801 17:17:28 -- accel/accel.sh@21 -- # val= 00:06:49.801 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:06:49.801 17:17:28 -- accel/accel.sh@21 -- # val= 00:06:49.801 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:06:49.801 17:17:28 -- accel/accel.sh@21 
-- # val= 00:06:49.801 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:06:49.801 17:17:28 -- accel/accel.sh@21 -- # val= 00:06:49.801 17:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # IFS=: 00:06:49.801 17:17:28 -- accel/accel.sh@20 -- # read -r var val 00:06:49.801 17:17:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.801 17:17:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:49.801 17:17:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.801 00:06:49.802 real 0m2.659s 00:06:49.802 user 0m2.403s 00:06:49.802 sys 0m0.261s 00:06:49.802 17:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.802 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:06:49.802 ************************************ 00:06:49.802 END TEST accel_decomp 00:06:49.802 ************************************ 00:06:50.061 17:17:28 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.061 17:17:28 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:50.061 17:17:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.061 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:06:50.061 ************************************ 00:06:50.061 START TEST accel_decmop_full 00:06:50.061 ************************************ 00:06:50.061 17:17:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.061 17:17:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.061 17:17:28 -- accel/accel.sh@17 -- # local accel_module 00:06:50.061 17:17:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:06:50.061 17:17:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.061 17:17:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.061 17:17:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.061 17:17:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.061 17:17:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.061 17:17:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.061 17:17:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.061 17:17:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.061 17:17:28 -- accel/accel.sh@42 -- # jq -r . 00:06:50.061 [2024-07-12 17:17:28.800745] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:50.061 [2024-07-12 17:17:28.800809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937386 ] 00:06:50.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.061 [2024-07-12 17:17:28.872694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.061 [2024-07-12 17:17:28.913412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.437 17:17:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:51.437 00:06:51.437 SPDK Configuration: 00:06:51.437 Core mask: 0x1 00:06:51.437 00:06:51.437 Accel Perf Configuration: 00:06:51.437 Workload Type: decompress 00:06:51.437 Transfer size: 111250 bytes 00:06:51.437 Vector count 1 00:06:51.437 Module: software 00:06:51.437 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.437 Queue depth: 32 00:06:51.437 Allocate depth: 32 00:06:51.437 # threads/core: 1 00:06:51.437 Run time: 1 seconds 00:06:51.437 Verify: Yes 00:06:51.437 00:06:51.437 Running for 1 seconds... 00:06:51.437 00:06:51.437 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.437 ------------------------------------------------------------------------------------ 00:06:51.437 0,0 3136/s 129 MiB/s 0 0 00:06:51.437 ==================================================================================== 00:06:51.437 Total 3136/s 332 MiB/s 0 0' 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.437 17:17:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.437 17:17:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.437 17:17:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.437 17:17:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.437 17:17:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.437 17:17:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.437 17:17:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.437 17:17:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.437 17:17:30 -- accel/accel.sh@42 -- # jq -r . 
00:06:51.437 [2024-07-12 17:17:30.132736] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:51.437 [2024-07-12 17:17:30.132814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937595 ] 00:06:51.437 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.437 [2024-07-12 17:17:30.212654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.437 [2024-07-12 17:17:30.253297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val=0x1 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- 
accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val=decompress 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val=software 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val=32 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val=32 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- 
accel/accel.sh@21 -- # val=1 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val=Yes 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:51.437 17:17:30 -- accel/accel.sh@21 -- # val= 00:06:51.437 17:17:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # IFS=: 00:06:51.437 17:17:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 17:17:31 -- accel/accel.sh@21 -- # val= 00:06:52.815 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 17:17:31 -- accel/accel.sh@21 -- # val= 00:06:52.815 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 17:17:31 -- accel/accel.sh@21 -- # val= 00:06:52.815 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 17:17:31 -- accel/accel.sh@21 -- # val= 00:06:52.815 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 17:17:31 
-- accel/accel.sh@20 -- # IFS=: 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 17:17:31 -- accel/accel.sh@21 -- # val= 00:06:52.815 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 17:17:31 -- accel/accel.sh@21 -- # val= 00:06:52.815 17:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # IFS=: 00:06:52.815 17:17:31 -- accel/accel.sh@20 -- # read -r var val 00:06:52.815 17:17:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.815 17:17:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:52.815 17:17:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.815 00:06:52.815 real 0m2.672s 00:06:52.815 user 0m2.424s 00:06:52.815 sys 0m0.254s 00:06:52.815 17:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.815 17:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:52.815 ************************************ 00:06:52.815 END TEST accel_decmop_full 00:06:52.815 ************************************ 00:06:52.815 17:17:31 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.815 17:17:31 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:52.815 17:17:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.815 17:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:52.815 ************************************ 00:06:52.815 START TEST accel_decomp_mcore 00:06:52.815 ************************************ 00:06:52.815 17:17:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.815 17:17:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.815 17:17:31 -- accel/accel.sh@17 -- # local 
accel_module 00:06:52.815 17:17:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.815 17:17:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.816 17:17:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.816 17:17:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.816 17:17:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.816 17:17:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.816 17:17:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.816 17:17:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.816 17:17:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.816 17:17:31 -- accel/accel.sh@42 -- # jq -r . 00:06:52.816 [2024-07-12 17:17:31.515184] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:52.816 [2024-07-12 17:17:31.515274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937833 ] 00:06:52.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.816 [2024-07-12 17:17:31.584573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.816 [2024-07-12 17:17:31.628495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.816 [2024-07-12 17:17:31.628600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.816 [2024-07-12 17:17:31.628704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.816 [2024-07-12 17:17:31.628707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.190 17:17:32 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:54.190 00:06:54.190 SPDK Configuration: 00:06:54.190 Core mask: 0xf 00:06:54.190 00:06:54.190 Accel Perf Configuration: 00:06:54.190 Workload Type: decompress 00:06:54.190 Transfer size: 4096 bytes 00:06:54.190 Vector count 1 00:06:54.190 Module: software 00:06:54.190 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.190 Queue depth: 32 00:06:54.190 Allocate depth: 32 00:06:54.190 # threads/core: 1 00:06:54.190 Run time: 1 seconds 00:06:54.190 Verify: Yes 00:06:54.190 00:06:54.190 Running for 1 seconds... 00:06:54.190 00:06:54.190 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.190 ------------------------------------------------------------------------------------ 00:06:54.190 0,0 42464/s 78 MiB/s 0 0 00:06:54.190 3,0 42656/s 78 MiB/s 0 0 00:06:54.190 2,0 67648/s 124 MiB/s 0 0 00:06:54.190 1,0 42624/s 78 MiB/s 0 0 00:06:54.190 ==================================================================================== 00:06:54.190 Total 195392/s 763 MiB/s 0 0' 00:06:54.190 17:17:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.190 17:17:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.190 17:17:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.190 17:17:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.190 17:17:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.190 17:17:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.190 17:17:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.190 17:17:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.190 17:17:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.190 17:17:32 -- 
accel/accel.sh@42 -- # jq -r . 00:06:54.190 [2024-07-12 17:17:32.841448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:54.190 [2024-07-12 17:17:32.841530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938038 ] 00:06:54.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.190 [2024-07-12 17:17:32.923722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.190 [2024-07-12 17:17:32.966670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.190 [2024-07-12 17:17:32.966774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.190 [2024-07-12 17:17:32.966880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.190 [2024-07-12 17:17:32.966883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=0xf 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- 
accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=decompress 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=software 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=32 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- 
accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=32 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=1 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val=Yes 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:54.190 17:17:33 -- accel/accel.sh@21 -- # val= 00:06:54.190 17:17:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # IFS=: 00:06:54.190 17:17:33 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 
17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@21 -- # val= 00:06:55.565 17:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # IFS=: 00:06:55.565 17:17:34 -- accel/accel.sh@20 -- # read -r var val 00:06:55.565 17:17:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.565 17:17:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:55.565 17:17:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.565 00:06:55.565 real 0m2.671s 00:06:55.565 user 0m9.110s 00:06:55.565 sys 0m0.278s 00:06:55.565 17:17:34 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:55.565 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.565 ************************************ 00:06:55.565 END TEST accel_decomp_mcore 00:06:55.565 ************************************ 00:06:55.565 17:17:34 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.565 17:17:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:55.565 17:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.565 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.565 ************************************ 00:06:55.565 START TEST accel_decomp_full_mcore 00:06:55.565 ************************************ 00:06:55.565 17:17:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.565 17:17:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.565 17:17:34 -- accel/accel.sh@17 -- # local accel_module 00:06:55.565 17:17:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.565 17:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.566 17:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.566 17:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.566 17:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.566 17:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.566 17:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.566 17:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.566 17:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.566 17:17:34 -- accel/accel.sh@42 -- # jq -r . 
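Aside: the repeated `IFS=:` / `read -r var val` / `case "$var" in` lines throughout this log are bash xtrace output from the parsing loop in `accel.sh`, which splits each colon-separated record of the accel_perf output into a key and a value and dispatches on the key. A minimal sketch of that pattern (the `parse_config` helper and the `depth`/`threads` keys are illustrative, not the actual accel.sh source):

```shell
# Split "key:value" records the way the traced loop does: IFS=: makes
# read cut each line at the first colon, filling var with the key and
# val with the rest; the case statement then dispatches on the key.
parse_config() {
  while IFS=: read -r var val; do
    case "$var" in
      depth)   echo "queue depth = $val" ;;
      threads) echo "threads/core = $val" ;;
      *)       echo "ignored: $var" ;;
    esac
  done
}

printf 'depth:32\nthreads:1\n' | parse_config
# queue depth = 32
# threads/core = 1
```

Each iteration of that loop emits one `case "$var" in` / `IFS=:` / `read -r var val` trace triple, which is why those three lines repeat so densely above.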
00:06:55.566 [2024-07-12 17:17:34.225939] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:55.566 [2024-07-12 17:17:34.226046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938281 ] 00:06:55.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.566 [2024-07-12 17:17:34.342160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.566 [2024-07-12 17:17:34.387707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.566 [2024-07-12 17:17:34.387810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.566 [2024-07-12 17:17:34.387915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.566 [2024-07-12 17:17:34.387918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.942 17:17:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:56.942 00:06:56.942 SPDK Configuration: 00:06:56.942 Core mask: 0xf 00:06:56.942 00:06:56.942 Accel Perf Configuration: 00:06:56.942 Workload Type: decompress 00:06:56.942 Transfer size: 111250 bytes 00:06:56.942 Vector count 1 00:06:56.942 Module: software 00:06:56.942 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.942 Queue depth: 32 00:06:56.942 Allocate depth: 32 00:06:56.942 # threads/core: 1 00:06:56.942 Run time: 1 seconds 00:06:56.942 Verify: Yes 00:06:56.942 00:06:56.942 Running for 1 seconds... 
00:06:56.942 00:06:56.942 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.942 ------------------------------------------------------------------------------------ 00:06:56.942 0,0 3136/s 129 MiB/s 0 0 00:06:56.942 3,0 3136/s 129 MiB/s 0 0 00:06:56.942 2,0 5216/s 215 MiB/s 0 0 00:06:56.942 1,0 3136/s 129 MiB/s 0 0 00:06:56.942 ==================================================================================== 00:06:56.942 Total 14624/s 1551 MiB/s 0 0' 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:56.942 17:17:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:56.942 17:17:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.942 17:17:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.942 17:17:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.942 17:17:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.942 17:17:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.942 17:17:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.942 17:17:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.942 17:17:35 -- accel/accel.sh@42 -- # jq -r . 00:06:56.942 [2024-07-12 17:17:35.615430] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:56.942 [2024-07-12 17:17:35.615503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938526 ] 00:06:56.942 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.942 [2024-07-12 17:17:35.695971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.942 [2024-07-12 17:17:35.739332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.942 [2024-07-12 17:17:35.739434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.942 [2024-07-12 17:17:35.739542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.942 [2024-07-12 17:17:35.739545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=0xf 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 
-- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=decompress 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=software 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=32 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=32 00:06:56.942 17:17:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=1 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val=Yes 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.942 17:17:35 -- accel/accel.sh@21 -- # val= 00:06:56.942 17:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # IFS=: 00:06:56.942 17:17:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 
17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@21 -- # val= 00:06:58.319 17:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.319 17:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.319 17:17:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.319 17:17:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:58.319 17:17:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.319 00:06:58.319 real 0m2.751s 00:06:58.319 user 0m9.247s 00:06:58.319 sys 0m0.309s 00:06:58.319 17:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.319 17:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:58.319 ************************************ 00:06:58.319 END TEST 
accel_decomp_full_mcore 00:06:58.319 ************************************ 00:06:58.319 17:17:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.319 17:17:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:58.319 17:17:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.319 17:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:58.319 ************************************ 00:06:58.319 START TEST accel_decomp_mthread 00:06:58.319 ************************************ 00:06:58.319 17:17:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.319 17:17:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.319 17:17:36 -- accel/accel.sh@17 -- # local accel_module 00:06:58.319 17:17:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.319 17:17:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.319 17:17:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.319 17:17:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.319 17:17:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.319 17:17:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.319 17:17:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.319 17:17:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.319 17:17:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.319 17:17:36 -- accel/accel.sh@42 -- # jq -r . 00:06:58.319 [2024-07-12 17:17:37.013227] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:58.319 [2024-07-12 17:17:37.013354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938802 ] 00:06:58.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.319 [2024-07-12 17:17:37.129083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.319 [2024-07-12 17:17:37.171208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.696 17:17:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:59.696 00:06:59.696 SPDK Configuration: 00:06:59.696 Core mask: 0x1 00:06:59.696 00:06:59.696 Accel Perf Configuration: 00:06:59.696 Workload Type: decompress 00:06:59.696 Transfer size: 4096 bytes 00:06:59.696 Vector count 1 00:06:59.696 Module: software 00:06:59.696 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.696 Queue depth: 32 00:06:59.696 Allocate depth: 32 00:06:59.696 # threads/core: 2 00:06:59.696 Run time: 1 seconds 00:06:59.696 Verify: Yes 00:06:59.696 00:06:59.696 Running for 1 seconds... 
00:06:59.696 00:06:59.696 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.696 ------------------------------------------------------------------------------------ 00:06:59.696 0,1 23712/s 43 MiB/s 0 0 00:06:59.696 0,0 23616/s 43 MiB/s 0 0 00:06:59.696 ==================================================================================== 00:06:59.696 Total 47328/s 184 MiB/s 0 0' 00:06:59.696 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.696 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.696 17:17:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.696 17:17:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.696 17:17:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.696 17:17:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.696 17:17:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.696 17:17:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.696 17:17:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.696 17:17:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.696 17:17:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.696 17:17:38 -- accel/accel.sh@42 -- # jq -r . 00:06:59.697 [2024-07-12 17:17:38.381661] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:59.697 [2024-07-12 17:17:38.381736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939068 ] 00:06:59.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.697 [2024-07-12 17:17:38.462369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.697 [2024-07-12 17:17:38.502208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val=0x1 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 
-- # val=decompress 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val=software 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val=32 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val=32 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val=2 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # 
IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val=Yes 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.697 17:17:38 -- accel/accel.sh@21 -- # val= 00:06:59.697 17:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:59.697 17:17:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:01.073 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:01.073 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:01.073 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:01.073 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@21 
-- # val= 00:07:01.073 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:01.073 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@21 -- # val= 00:07:01.073 17:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.073 17:17:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.073 17:17:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.073 17:17:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:01.073 17:17:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.073 00:07:01.073 real 0m2.707s 00:07:01.073 user 0m2.409s 00:07:01.073 sys 0m0.304s 00:07:01.073 17:17:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.073 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.073 ************************************ 00:07:01.073 END TEST accel_decomp_mthread 00:07:01.073 ************************************ 00:07:01.073 17:17:39 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.073 17:17:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:01.073 17:17:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.073 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.073 ************************************ 00:07:01.073 START TEST accel_deomp_full_mthread 00:07:01.074 ************************************ 00:07:01.074 17:17:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
00:07:01.074 17:17:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.074 17:17:39 -- accel/accel.sh@17 -- # local accel_module 00:07:01.074 17:17:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.074 17:17:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.074 17:17:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.074 17:17:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.074 17:17:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.074 17:17:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.074 17:17:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.074 17:17:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.074 17:17:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.074 17:17:39 -- accel/accel.sh@42 -- # jq -r . 00:07:01.074 [2024-07-12 17:17:39.736706] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:01.074 [2024-07-12 17:17:39.736749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939350 ] 00:07:01.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.074 [2024-07-12 17:17:39.804393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.074 [2024-07-12 17:17:39.845220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.473 17:17:41 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:02.473 00:07:02.473 SPDK Configuration: 00:07:02.473 Core mask: 0x1 00:07:02.473 00:07:02.473 Accel Perf Configuration: 00:07:02.473 Workload Type: decompress 00:07:02.473 Transfer size: 111250 bytes 00:07:02.473 Vector count 1 00:07:02.473 Module: software 00:07:02.473 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.473 Queue depth: 32 00:07:02.473 Allocate depth: 32 00:07:02.473 # threads/core: 2 00:07:02.473 Run time: 1 seconds 00:07:02.473 Verify: Yes 00:07:02.473 00:07:02.473 Running for 1 seconds... 00:07:02.473 00:07:02.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.473 ------------------------------------------------------------------------------------ 00:07:02.473 0,1 1600/s 66 MiB/s 0 0 00:07:02.473 0,0 1600/s 66 MiB/s 0 0 00:07:02.473 ==================================================================================== 00:07:02.473 Total 3200/s 339 MiB/s 0 0' 00:07:02.473 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 17:17:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.473 17:17:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.473 17:17:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.473 17:17:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.473 17:17:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.473 17:17:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.473 17:17:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.473 17:17:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.473 17:17:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.473 17:17:41 -- accel/accel.sh@42 -- # jq -r . 
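Aside: the `Core mask` values in these configuration blocks (`0xf` for the mcore tests, `0x1` here) are bitmasks, and each set bit maps to one `Reactor started on core N` notice earlier in the log. A small sketch of that decoding (the `mask_to_cores` helper is hypothetical, not part of the SPDK scripts):

```shell
# Decode a core mask such as the -m 0xf / -c 0xf arguments above into
# the list of core numbers whose bits are set: bit 0 -> core 0, etc.
mask_to_cores() {
  mask=$(( $1 ))          # accepts hex constants like 0xf
  core=0
  cores=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -eq 1 ]; then
      cores="$cores $core"
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${cores# }"       # strip the leading space
}

mask_to_cores 0xf   # -> 0 1 2 3 (the four reactors in the mcore runs)
mask_to_cores 0x1   # -> 0      (the single reactor in this run)
```

This matches the log: the `-m 0xf` runs report one result row per core/thread pair for cores 0 through 3, while the `0x1` runs start a single reactor on core 0.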
00:07:02.473 [2024-07-12 17:17:41.084065] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:02.473 [2024-07-12 17:17:41.084128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939622 ] 00:07:02.473 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.473 [2024-07-12 17:17:41.164522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.473 [2024-07-12 17:17:41.204663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.473 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.473 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.473 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val=0x1 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- 
accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val=decompress 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val=software 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val=32 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val=32 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- 
accel/accel.sh@21 -- # val=2 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val=Yes 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:02.474 17:17:41 -- accel/accel.sh@21 -- # val= 00:07:02.474 17:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # IFS=: 00:07:02.474 17:17:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:03.462 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:03.462 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:03.462 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:03.462 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.462 17:17:42 
-- accel/accel.sh@20 -- # IFS=: 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:03.462 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:03.462 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@21 -- # val= 00:07:03.462 17:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # IFS=: 00:07:03.462 17:17:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.462 17:17:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.462 17:17:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:03.462 17:17:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.462 00:07:03.462 real 0m2.697s 00:07:03.462 user 0m2.447s 00:07:03.462 sys 0m0.255s 00:07:03.462 17:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.462 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:03.462 ************************************ 00:07:03.462 END TEST accel_deomp_full_mthread 00:07:03.462 ************************************ 00:07:03.722 17:17:42 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:03.722 17:17:42 -- accel/accel.sh@129 -- # build_accel_config 00:07:03.722 17:17:42 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:03.722 17:17:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.722 17:17:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.722 17:17:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.722 17:17:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:03.722 17:17:42 -- 
accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.722 17:17:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.722 17:17:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.722 17:17:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.722 17:17:42 -- accel/accel.sh@42 -- # jq -r . 00:07:03.722 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:03.722 ************************************ 00:07:03.722 START TEST accel_dif_functional_tests 00:07:03.722 ************************************ 00:07:03.722 17:17:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:03.722 [2024-07-12 17:17:42.483605] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:03.722 [2024-07-12 17:17:42.483653] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939906 ] 00:07:03.722 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.722 [2024-07-12 17:17:42.553374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.722 [2024-07-12 17:17:42.596109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.722 [2024-07-12 17:17:42.596212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.722 [2024-07-12 17:17:42.596213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.722 00:07:03.722 00:07:03.722 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.722 http://cunit.sourceforge.net/ 00:07:03.722 00:07:03.722 00:07:03.722 Suite: accel_dif 00:07:03.722 Test: verify: DIF generated, GUARD check ...passed 00:07:03.722 Test: verify: DIF generated, APPTAG check ...passed 00:07:03.722 Test: verify: DIF generated, REFTAG check ...passed 00:07:03.722 Test: verify: DIF not generated, GUARD check ...[2024-07-12 
17:17:42.664951] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:03.722 [2024-07-12 17:17:42.665015] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:03.722 passed 00:07:03.722 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 17:17:42.665055] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:03.722 [2024-07-12 17:17:42.665075] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:03.722 passed 00:07:03.722 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 17:17:42.665100] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:03.722 [2024-07-12 17:17:42.665118] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:03.722 passed 00:07:03.722 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:03.722 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 17:17:42.665177] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:03.722 passed 00:07:03.722 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:03.722 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:03.722 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:03.722 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 17:17:42.665319] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:03.722 passed 00:07:03.722 Test: generate copy: DIF generated, GUARD check ...passed 00:07:03.722 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:03.722 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:03.722 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:03.722 Test: generate copy: DIF generated, no APPTAG check flag 
set ...passed 00:07:03.722 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:03.722 Test: generate copy: iovecs-len validate ...[2024-07-12 17:17:42.665545] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:03.722 passed 00:07:03.722 Test: generate copy: buffer alignment validate ...passed 00:07:03.722 00:07:03.722 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.722 suites 1 1 n/a 0 0 00:07:03.722 tests 20 20 20 0 0 00:07:03.722 asserts 204 204 204 0 n/a 00:07:03.722 00:07:03.722 Elapsed time = 0.002 seconds 00:07:03.981 00:07:03.981 real 0m0.371s 00:07:03.981 user 0m0.577s 00:07:03.981 sys 0m0.148s 00:07:03.981 17:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.981 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 ************************************ 00:07:03.981 END TEST accel_dif_functional_tests 00:07:03.981 ************************************ 00:07:03.981 00:07:03.981 real 0m56.715s 00:07:03.981 user 1m4.636s 00:07:03.981 sys 0m7.011s 00:07:03.981 17:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.981 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 ************************************ 00:07:03.981 END TEST accel 00:07:03.981 ************************************ 00:07:03.981 17:17:42 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:03.981 17:17:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:03.981 17:17:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.981 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 ************************************ 00:07:03.981 START TEST accel_rpc 00:07:03.981 ************************************ 00:07:03.981 17:17:42 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:04.239 * Looking for test storage... 00:07:04.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:04.239 17:17:42 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:04.239 17:17:42 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3940069 00:07:04.239 17:17:42 -- accel/accel_rpc.sh@15 -- # waitforlisten 3940069 00:07:04.239 17:17:42 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:04.239 17:17:42 -- common/autotest_common.sh@819 -- # '[' -z 3940069 ']' 00:07:04.239 17:17:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.239 17:17:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.239 17:17:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.239 17:17:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.239 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.239 [2024-07-12 17:17:43.041914] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:04.239 [2024-07-12 17:17:43.041980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940069 ] 00:07:04.239 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.239 [2024-07-12 17:17:43.124430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.239 [2024-07-12 17:17:43.167439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.239 [2024-07-12 17:17:43.167600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.497 17:17:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:04.497 17:17:43 -- common/autotest_common.sh@852 -- # return 0 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:04.497 17:17:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:04.497 17:17:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.497 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.497 ************************************ 00:07:04.497 START TEST accel_assign_opcode 00:07:04.497 ************************************ 00:07:04.497 17:17:43 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:04.497 17:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.497 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.497 [2024-07-12 17:17:43.248136] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:07:04.497 17:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:04.497 17:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.497 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.497 [2024-07-12 17:17:43.256152] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:04.497 17:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:04.497 17:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.497 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.497 17:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:04.497 17:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.497 17:17:43 -- accel/accel_rpc.sh@42 -- # grep software 00:07:04.497 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.497 17:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.756 software 00:07:04.756 00:07:04.756 real 0m0.239s 00:07:04.756 user 0m0.045s 00:07:04.756 sys 0m0.013s 00:07:04.756 17:17:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.756 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.756 ************************************ 00:07:04.756 END TEST accel_assign_opcode 00:07:04.756 ************************************ 00:07:04.756 17:17:43 -- accel/accel_rpc.sh@55 -- # killprocess 3940069 00:07:04.756 17:17:43 -- common/autotest_common.sh@926 -- # '[' -z 3940069 ']' 00:07:04.756 17:17:43 -- common/autotest_common.sh@930 -- # kill -0 3940069 00:07:04.756 17:17:43 -- common/autotest_common.sh@931 -- # uname 00:07:04.756 
17:17:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:04.756 17:17:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3940069 00:07:04.756 17:17:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:04.756 17:17:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:04.756 17:17:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3940069' 00:07:04.756 killing process with pid 3940069 00:07:04.756 17:17:43 -- common/autotest_common.sh@945 -- # kill 3940069 00:07:04.756 17:17:43 -- common/autotest_common.sh@950 -- # wait 3940069 00:07:05.015 00:07:05.015 real 0m0.952s 00:07:05.015 user 0m0.905s 00:07:05.015 sys 0m0.418s 00:07:05.015 17:17:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.015 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.015 ************************************ 00:07:05.015 END TEST accel_rpc 00:07:05.015 ************************************ 00:07:05.015 17:17:43 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:05.015 17:17:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:05.015 17:17:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.015 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.015 ************************************ 00:07:05.015 START TEST app_cmdline 00:07:05.015 ************************************ 00:07:05.015 17:17:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:05.015 * Looking for test storage... 
00:07:05.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.274 17:17:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:05.274 17:17:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3940304 00:07:05.274 17:17:43 -- app/cmdline.sh@18 -- # waitforlisten 3940304 00:07:05.274 17:17:43 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:05.274 17:17:43 -- common/autotest_common.sh@819 -- # '[' -z 3940304 ']' 00:07:05.274 17:17:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.274 17:17:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:05.274 17:17:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.274 17:17:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:05.274 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:07:05.274 [2024-07-12 17:17:44.036772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:05.274 [2024-07-12 17:17:44.036834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940304 ] 00:07:05.274 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.274 [2024-07-12 17:17:44.118079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.274 [2024-07-12 17:17:44.158911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.274 [2024-07-12 17:17:44.159070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.212 17:17:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:06.212 17:17:44 -- common/autotest_common.sh@852 -- # return 0 00:07:06.212 17:17:44 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:06.470 { 00:07:06.470 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:06.470 "fields": { 00:07:06.470 "major": 24, 00:07:06.470 "minor": 1, 00:07:06.470 "patch": 1, 00:07:06.470 "suffix": "-pre", 00:07:06.470 "commit": "4b94202c6" 00:07:06.470 } 00:07:06.470 } 00:07:06.470 17:17:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:06.470 17:17:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:06.470 17:17:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:06.470 17:17:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:06.470 17:17:45 -- app/cmdline.sh@26 -- # sort 00:07:06.470 17:17:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:06.470 17:17:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:06.470 17:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:06.470 17:17:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.470 17:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:06.470 
17:17:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:06.470 17:17:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:06.470 17:17:45 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.470 17:17:45 -- common/autotest_common.sh@640 -- # local es=0 00:07:06.470 17:17:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.470 17:17:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.470 17:17:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.470 17:17:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.470 17:17:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.471 17:17:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.471 17:17:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.471 17:17:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.471 17:17:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:06.471 17:17:45 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.729 request: 00:07:06.729 { 00:07:06.729 "method": "env_dpdk_get_mem_stats", 00:07:06.729 "req_id": 1 00:07:06.729 } 00:07:06.729 Got JSON-RPC error response 00:07:06.729 response: 00:07:06.729 { 00:07:06.729 "code": -32601, 00:07:06.729 "message": "Method not found" 00:07:06.729 } 00:07:06.729 17:17:45 -- common/autotest_common.sh@643 
-- # es=1 00:07:06.729 17:17:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:06.729 17:17:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:06.729 17:17:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:06.730 17:17:45 -- app/cmdline.sh@1 -- # killprocess 3940304 00:07:06.730 17:17:45 -- common/autotest_common.sh@926 -- # '[' -z 3940304 ']' 00:07:06.730 17:17:45 -- common/autotest_common.sh@930 -- # kill -0 3940304 00:07:06.730 17:17:45 -- common/autotest_common.sh@931 -- # uname 00:07:06.730 17:17:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:06.730 17:17:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3940304 00:07:06.730 17:17:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:06.730 17:17:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:06.730 17:17:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3940304' 00:07:06.730 killing process with pid 3940304 00:07:06.730 17:17:45 -- common/autotest_common.sh@945 -- # kill 3940304 00:07:06.730 17:17:45 -- common/autotest_common.sh@950 -- # wait 3940304 00:07:07.012 00:07:07.012 real 0m1.943s 00:07:07.012 user 0m2.498s 00:07:07.012 sys 0m0.448s 00:07:07.012 17:17:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.012 17:17:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.012 ************************************ 00:07:07.012 END TEST app_cmdline 00:07:07.012 ************************************ 00:07:07.012 17:17:45 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:07.012 17:17:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.012 17:17:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.012 17:17:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.012 ************************************ 00:07:07.012 START TEST version 00:07:07.012 
************************************ 00:07:07.012 17:17:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:07.012 * Looking for test storage... 00:07:07.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:07.012 17:17:45 -- app/version.sh@17 -- # get_header_version major 00:07:07.012 17:17:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.012 17:17:45 -- app/version.sh@14 -- # cut -f2 00:07:07.012 17:17:45 -- app/version.sh@14 -- # tr -d '"' 00:07:07.012 17:17:45 -- app/version.sh@17 -- # major=24 00:07:07.012 17:17:45 -- app/version.sh@18 -- # get_header_version minor 00:07:07.012 17:17:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.012 17:17:45 -- app/version.sh@14 -- # cut -f2 00:07:07.012 17:17:45 -- app/version.sh@14 -- # tr -d '"' 00:07:07.271 17:17:45 -- app/version.sh@18 -- # minor=1 00:07:07.271 17:17:45 -- app/version.sh@19 -- # get_header_version patch 00:07:07.271 17:17:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.271 17:17:45 -- app/version.sh@14 -- # cut -f2 00:07:07.271 17:17:45 -- app/version.sh@14 -- # tr -d '"' 00:07:07.271 17:17:45 -- app/version.sh@19 -- # patch=1 00:07:07.271 17:17:45 -- app/version.sh@20 -- # get_header_version suffix 00:07:07.271 17:17:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.271 17:17:45 -- app/version.sh@14 -- # cut -f2 00:07:07.271 17:17:45 -- app/version.sh@14 -- # tr -d '"' 00:07:07.271 17:17:45 -- app/version.sh@20 -- # suffix=-pre 00:07:07.271 17:17:45 -- 
app/version.sh@22 -- # version=24.1 00:07:07.271 17:17:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.271 17:17:45 -- app/version.sh@25 -- # version=24.1.1 00:07:07.271 17:17:45 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:07.271 17:17:45 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:07.271 17:17:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.271 17:17:46 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:07.271 17:17:46 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:07.271 00:07:07.272 real 0m0.161s 00:07:07.272 user 0m0.079s 00:07:07.272 sys 0m0.119s 00:07:07.272 17:17:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.272 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 ************************************ 00:07:07.272 END TEST version 00:07:07.272 ************************************ 00:07:07.272 17:17:46 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@204 -- # uname -s 00:07:07.272 17:17:46 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:07.272 17:17:46 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:07.272 17:17:46 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:07.272 17:17:46 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:07.272 17:17:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:07.272 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 17:17:46 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@278 -- 
# '[' 0 -eq 1 ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:07.272 17:17:46 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:07.272 17:17:46 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.272 17:17:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:07.272 17:17:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.272 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 ************************************ 00:07:07.272 START TEST nvmf_tcp 00:07:07.272 ************************************ 00:07:07.272 17:17:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.272 * Looking for test storage... 00:07:07.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:07.272 17:17:46 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:07.272 17:17:46 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.272 17:17:46 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.272 17:17:46 -- nvmf/common.sh@7 -- # uname -s 00:07:07.272 17:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.272 17:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.272 17:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.272 17:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.272 17:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.272 17:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.272 17:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.272 17:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.272 17:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.272 17:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.272 17:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:07.272 17:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:07.272 17:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.272 17:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.272 17:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.272 17:17:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.272 17:17:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.272 17:17:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.272 17:17:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.272 17:17:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.272 17:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.272 17:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.272 17:17:46 -- paths/export.sh@5 -- # export PATH 00:07:07.272 17:17:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.272 17:17:46 -- nvmf/common.sh@46 -- # : 0 00:07:07.272 17:17:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:07.272 17:17:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:07.272 
17:17:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:07.272 17:17:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.272 17:17:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.272 17:17:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:07.272 17:17:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:07.272 17:17:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:07.272 17:17:46 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.272 17:17:46 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:07.272 17:17:46 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:07.272 17:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:07.272 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.531 17:17:46 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:07.531 17:17:46 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:07.531 17:17:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:07.531 17:17:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.531 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.531 ************************************ 00:07:07.531 START TEST nvmf_example 00:07:07.531 ************************************ 00:07:07.531 17:17:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:07.531 * Looking for test storage... 
00:07:07.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.531 17:17:46 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.531 17:17:46 -- nvmf/common.sh@7 -- # uname -s 00:07:07.531 17:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.531 17:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.531 17:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.531 17:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.531 17:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.531 17:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.531 17:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.531 17:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.531 17:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.531 17:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.531 17:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:07.531 17:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:07.531 17:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.531 17:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.531 17:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.531 17:17:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.531 17:17:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.531 17:17:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.531 17:17:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.531 17:17:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 17:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 17:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 17:17:46 -- paths/export.sh@5 -- # export PATH 00:07:07.531 17:17:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 17:17:46 -- nvmf/common.sh@46 -- # : 0 00:07:07.531 17:17:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:07.531 17:17:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:07.531 17:17:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:07.531 17:17:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.531 17:17:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.531 17:17:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:07.531 17:17:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:07.531 17:17:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:07.531 17:17:46 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:07.531 17:17:46 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:07.531 17:17:46 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:07.531 17:17:46 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:07.531 17:17:46 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:07.531 17:17:46 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:07.531 17:17:46 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:07.531 17:17:46 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:07.531 17:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:07.531 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.531 17:17:46 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:07:07.531 17:17:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:07.531 17:17:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.531 17:17:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:07.531 17:17:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:07.531 17:17:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:07.531 17:17:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.531 17:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.531 17:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.531 17:17:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:07.531 17:17:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:07.531 17:17:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:07.531 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:12.804 17:17:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:12.804 17:17:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:12.804 17:17:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:12.804 17:17:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:12.804 17:17:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:12.804 17:17:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:12.804 17:17:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:12.804 17:17:51 -- nvmf/common.sh@294 -- # net_devs=() 00:07:12.804 17:17:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:12.804 17:17:51 -- nvmf/common.sh@295 -- # e810=() 00:07:12.804 17:17:51 -- nvmf/common.sh@295 -- # local -ga e810 00:07:12.804 17:17:51 -- nvmf/common.sh@296 -- # x722=() 00:07:12.804 17:17:51 -- nvmf/common.sh@296 -- # local -ga x722 00:07:12.804 17:17:51 -- nvmf/common.sh@297 -- # mlx=() 00:07:12.804 17:17:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:12.804 17:17:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:07:12.804 17:17:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.804 17:17:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:12.804 17:17:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:12.804 17:17:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:12.804 17:17:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:12.804 17:17:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:12.804 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:12.804 17:17:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:07:12.804 17:17:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:12.804 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:12.804 17:17:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:12.804 17:17:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:12.804 17:17:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:12.805 17:17:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:12.805 17:17:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:12.805 17:17:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.805 17:17:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:12.805 17:17:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.805 17:17:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:12.805 Found net devices under 0000:af:00.0: cvl_0_0 00:07:12.805 17:17:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.805 17:17:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:12.805 17:17:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.805 17:17:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:12.805 17:17:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.805 17:17:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:12.805 Found net devices under 0000:af:00.1: cvl_0_1 00:07:12.805 17:17:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.805 17:17:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:12.805 17:17:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:12.805 17:17:51 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:12.805 17:17:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:12.805 17:17:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:12.805 17:17:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.805 17:17:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.805 17:17:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.805 17:17:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:12.805 17:17:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.805 17:17:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.805 17:17:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:12.805 17:17:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.805 17:17:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.805 17:17:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:12.805 17:17:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:12.805 17:17:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.805 17:17:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.064 17:17:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.064 17:17:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.064 17:17:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:13.064 17:17:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.064 17:17:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.064 17:17:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.064 17:17:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:13.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:13.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:07:13.064 00:07:13.064 --- 10.0.0.2 ping statistics --- 00:07:13.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.064 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:07:13.064 17:17:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:07:13.064 00:07:13.064 --- 10.0.0.1 ping statistics --- 00:07:13.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.064 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:07:13.064 17:17:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.064 17:17:51 -- nvmf/common.sh@410 -- # return 0 00:07:13.064 17:17:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:13.064 17:17:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.064 17:17:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:13.064 17:17:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:13.064 17:17:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.064 17:17:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:13.064 17:17:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:13.064 17:17:51 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:13.064 17:17:51 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:13.064 17:17:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:13.064 17:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:13.064 17:17:51 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:13.064 17:17:51 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:13.064 17:17:51 -- target/nvmf_example.sh@34 -- # nvmfpid=3944067 00:07:13.064 17:17:51 -- target/nvmf_example.sh@35 -- # trap 'process_shm 
--id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.064 17:17:51 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:13.064 17:17:51 -- target/nvmf_example.sh@36 -- # waitforlisten 3944067 00:07:13.064 17:17:51 -- common/autotest_common.sh@819 -- # '[' -z 3944067 ']' 00:07:13.064 17:17:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.064 17:17:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:13.064 17:17:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.064 17:17:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:13.064 17:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:13.323 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.259 17:17:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.259 17:17:52 -- common/autotest_common.sh@852 -- # return 0 00:07:14.259 17:17:52 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:14.259 17:17:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:14.259 17:17:52 -- common/autotest_common.sh@10 -- # set +x 00:07:14.259 17:17:52 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.259 17:17:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.259 17:17:52 -- common/autotest_common.sh@10 -- # set +x 00:07:14.259 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.259 17:17:53 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:14.259 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.259 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:07:14.259 17:17:53 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.259 17:17:53 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:14.259 17:17:53 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:14.259 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.259 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:07:14.259 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.259 17:17:53 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:14.259 17:17:53 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:14.259 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.259 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:07:14.259 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.259 17:17:53 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.259 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.259 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:07:14.259 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.259 17:17:53 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:14.259 17:17:53 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:14.259 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.462 Initializing NVMe Controllers 00:07:26.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:26.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:26.462 Initialization complete. 
Launching workers. 00:07:26.462 ======================================================== 00:07:26.462 Latency(us) 00:07:26.462 Device Information : IOPS MiB/s Average min max 00:07:26.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15935.74 62.25 4015.98 846.42 16546.04 00:07:26.462 ======================================================== 00:07:26.462 Total : 15935.74 62.25 4015.98 846.42 16546.04 00:07:26.462 00:07:26.462 17:18:03 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:26.462 17:18:03 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:26.462 17:18:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:26.462 17:18:03 -- nvmf/common.sh@116 -- # sync 00:07:26.462 17:18:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:26.462 17:18:03 -- nvmf/common.sh@119 -- # set +e 00:07:26.462 17:18:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:26.462 17:18:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:26.462 rmmod nvme_tcp 00:07:26.462 rmmod nvme_fabrics 00:07:26.462 rmmod nvme_keyring 00:07:26.462 17:18:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:26.462 17:18:03 -- nvmf/common.sh@123 -- # set -e 00:07:26.462 17:18:03 -- nvmf/common.sh@124 -- # return 0 00:07:26.462 17:18:03 -- nvmf/common.sh@477 -- # '[' -n 3944067 ']' 00:07:26.462 17:18:03 -- nvmf/common.sh@478 -- # killprocess 3944067 00:07:26.462 17:18:03 -- common/autotest_common.sh@926 -- # '[' -z 3944067 ']' 00:07:26.462 17:18:03 -- common/autotest_common.sh@930 -- # kill -0 3944067 00:07:26.462 17:18:03 -- common/autotest_common.sh@931 -- # uname 00:07:26.462 17:18:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:26.462 17:18:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3944067 00:07:26.462 17:18:03 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:26.462 17:18:03 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:26.462 17:18:03 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3944067' 00:07:26.462 killing process with pid 3944067 00:07:26.462 17:18:03 -- common/autotest_common.sh@945 -- # kill 3944067 00:07:26.462 17:18:03 -- common/autotest_common.sh@950 -- # wait 3944067 00:07:26.462 nvmf threads initialize successfully 00:07:26.462 bdev subsystem init successfully 00:07:26.462 created a nvmf target service 00:07:26.462 create targets's poll groups done 00:07:26.462 all subsystems of target started 00:07:26.462 nvmf target is running 00:07:26.462 all subsystems of target stopped 00:07:26.462 destroy targets's poll groups done 00:07:26.462 destroyed the nvmf target service 00:07:26.462 bdev subsystem finish successfully 00:07:26.462 nvmf threads destroy successfully 00:07:26.462 17:18:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:26.462 17:18:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:26.462 17:18:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:26.462 17:18:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.462 17:18:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:26.462 17:18:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.462 17:18:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.462 17:18:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.031 17:18:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:27.031 17:18:05 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:27.031 17:18:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:27.031 17:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:27.031 00:07:27.031 real 0m19.500s 00:07:27.031 user 0m46.770s 00:07:27.031 sys 0m5.535s 00:07:27.031 17:18:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.031 17:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:27.031 ************************************ 00:07:27.031 END TEST 
nvmf_example 00:07:27.031 ************************************ 00:07:27.031 17:18:05 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:27.031 17:18:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:27.031 17:18:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.031 17:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:27.031 ************************************ 00:07:27.031 START TEST nvmf_filesystem 00:07:27.031 ************************************ 00:07:27.031 17:18:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:27.031 * Looking for test storage... 00:07:27.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.031 17:18:05 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:27.031 17:18:05 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:27.031 17:18:05 -- common/autotest_common.sh@34 -- # set -e 00:07:27.031 17:18:05 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:27.031 17:18:05 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:27.031 17:18:05 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:27.031 17:18:05 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:27.031 17:18:05 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:27.031 17:18:05 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:27.031 17:18:05 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:27.031 17:18:05 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:27.031 17:18:05 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:27.031 
17:18:05 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:27.031 17:18:05 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:27.031 17:18:05 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:27.031 17:18:05 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:27.031 17:18:05 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:27.031 17:18:05 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:27.031 17:18:05 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:27.031 17:18:05 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:27.031 17:18:05 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:27.031 17:18:05 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:27.031 17:18:05 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:27.031 17:18:05 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:27.031 17:18:05 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:27.031 17:18:05 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:27.031 17:18:05 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:27.031 17:18:05 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:27.031 17:18:05 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:27.031 17:18:05 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:27.031 17:18:05 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:27.031 17:18:05 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:27.031 17:18:05 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:27.031 17:18:05 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:27.031 17:18:05 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:27.031 17:18:05 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:27.031 17:18:05 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:27.031 17:18:05 -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:07:27.031 17:18:05 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:27.031 17:18:05 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:27.031 17:18:05 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:27.031 17:18:05 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:27.031 17:18:05 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:27.031 17:18:05 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:27.031 17:18:05 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:27.031 17:18:05 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:27.031 17:18:05 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:27.031 17:18:05 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:27.031 17:18:05 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:27.031 17:18:05 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:27.031 17:18:05 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:27.031 17:18:05 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:27.031 17:18:05 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:27.031 17:18:05 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:27.031 17:18:05 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:27.031 17:18:05 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:27.031 17:18:05 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:27.031 17:18:05 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:27.031 17:18:05 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:27.031 17:18:05 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:27.031 17:18:05 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:27.031 17:18:05 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:27.031 17:18:05 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 
00:07:27.031 17:18:05 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:27.031 17:18:05 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:27.031 17:18:05 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:27.031 17:18:05 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:27.031 17:18:05 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.032 17:18:05 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:27.032 17:18:05 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:27.032 17:18:05 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:27.032 17:18:05 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:27.032 17:18:05 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:27.032 17:18:05 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:27.032 17:18:05 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:27.032 17:18:05 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:27.032 17:18:05 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:27.032 17:18:05 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:27.032 17:18:05 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:27.032 17:18:05 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:27.032 17:18:05 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:27.032 17:18:05 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:27.032 17:18:05 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:27.032 17:18:05 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:27.032 17:18:05 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:27.032 17:18:05 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:27.032 17:18:05 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:27.032 17:18:05 -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:27.032 17:18:05 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:27.032 17:18:05 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:27.032 17:18:05 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:27.032 17:18:05 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:27.032 17:18:05 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.032 17:18:05 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:27.032 17:18:05 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:27.032 17:18:05 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:27.032 17:18:05 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:27.032 17:18:05 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:27.032 17:18:05 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:27.032 17:18:05 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:27.032 17:18:05 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:27.032 17:18:05 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:27.032 #define SPDK_CONFIG_H 00:07:27.032 #define SPDK_CONFIG_APPS 1 00:07:27.032 #define SPDK_CONFIG_ARCH native 00:07:27.032 #undef SPDK_CONFIG_ASAN 00:07:27.032 #undef SPDK_CONFIG_AVAHI 00:07:27.032 #undef SPDK_CONFIG_CET 00:07:27.032 #define SPDK_CONFIG_COVERAGE 1 00:07:27.032 #define SPDK_CONFIG_CROSS_PREFIX 00:07:27.032 #undef SPDK_CONFIG_CRYPTO 00:07:27.032 #undef 
SPDK_CONFIG_CRYPTO_MLX5 00:07:27.032 #undef SPDK_CONFIG_CUSTOMOCF 00:07:27.032 #undef SPDK_CONFIG_DAOS 00:07:27.032 #define SPDK_CONFIG_DAOS_DIR 00:07:27.032 #define SPDK_CONFIG_DEBUG 1 00:07:27.032 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:27.032 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:27.032 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:27.032 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.032 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:27.032 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:27.032 #define SPDK_CONFIG_EXAMPLES 1 00:07:27.032 #undef SPDK_CONFIG_FC 00:07:27.032 #define SPDK_CONFIG_FC_PATH 00:07:27.032 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:27.032 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:27.032 #undef SPDK_CONFIG_FUSE 00:07:27.032 #undef SPDK_CONFIG_FUZZER 00:07:27.032 #define SPDK_CONFIG_FUZZER_LIB 00:07:27.032 #undef SPDK_CONFIG_GOLANG 00:07:27.032 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:27.032 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:27.032 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:27.032 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:27.032 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:27.032 #define SPDK_CONFIG_IDXD 1 00:07:27.032 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:27.032 #undef SPDK_CONFIG_IPSEC_MB 00:07:27.032 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:27.032 #define SPDK_CONFIG_ISAL 1 00:07:27.032 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:27.032 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:27.032 #define SPDK_CONFIG_LIBDIR 00:07:27.032 #undef SPDK_CONFIG_LTO 00:07:27.032 #define SPDK_CONFIG_MAX_LCORES 00:07:27.032 #define SPDK_CONFIG_NVME_CUSE 1 00:07:27.032 #undef SPDK_CONFIG_OCF 00:07:27.032 #define SPDK_CONFIG_OCF_PATH 00:07:27.032 #define SPDK_CONFIG_OPENSSL_PATH 00:07:27.032 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:27.032 
#undef SPDK_CONFIG_PGO_USE 00:07:27.032 #define SPDK_CONFIG_PREFIX /usr/local 00:07:27.032 #undef SPDK_CONFIG_RAID5F 00:07:27.032 #undef SPDK_CONFIG_RBD 00:07:27.032 #define SPDK_CONFIG_RDMA 1 00:07:27.032 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:27.032 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:27.032 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:27.032 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:27.032 #define SPDK_CONFIG_SHARED 1 00:07:27.032 #undef SPDK_CONFIG_SMA 00:07:27.032 #define SPDK_CONFIG_TESTS 1 00:07:27.032 #undef SPDK_CONFIG_TSAN 00:07:27.032 #define SPDK_CONFIG_UBLK 1 00:07:27.032 #define SPDK_CONFIG_UBSAN 1 00:07:27.032 #undef SPDK_CONFIG_UNIT_TESTS 00:07:27.032 #undef SPDK_CONFIG_URING 00:07:27.032 #define SPDK_CONFIG_URING_PATH 00:07:27.032 #undef SPDK_CONFIG_URING_ZNS 00:07:27.032 #undef SPDK_CONFIG_USDT 00:07:27.032 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:27.032 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:27.032 #define SPDK_CONFIG_VFIO_USER 1 00:07:27.032 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:27.032 #define SPDK_CONFIG_VHOST 1 00:07:27.032 #define SPDK_CONFIG_VIRTIO 1 00:07:27.032 #undef SPDK_CONFIG_VTUNE 00:07:27.032 #define SPDK_CONFIG_VTUNE_DIR 00:07:27.032 #define SPDK_CONFIG_WERROR 1 00:07:27.032 #define SPDK_CONFIG_WPDK_DIR 00:07:27.032 #undef SPDK_CONFIG_XNVME 00:07:27.032 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:27.032 17:18:05 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:27.032 17:18:05 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.032 17:18:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.032 17:18:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.032 17:18:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.032 17:18:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.032 17:18:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.032 17:18:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.032 17:18:05 -- paths/export.sh@5 -- # export PATH 00:07:27.032 17:18:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.032 17:18:05 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:27.032 17:18:05 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:27.032 17:18:05 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:27.032 17:18:05 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:27.032 17:18:05 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:27.032 17:18:05 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:27.032 17:18:05 -- pm/common@16 -- # TEST_TAG=N/A 00:07:27.032 17:18:05 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:27.032 17:18:05 -- common/autotest_common.sh@52 -- # : 1 00:07:27.032 17:18:05 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:27.032 17:18:05 -- common/autotest_common.sh@56 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:27.032 17:18:05 -- common/autotest_common.sh@58 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:27.032 17:18:05 -- common/autotest_common.sh@60 -- # : 1 00:07:27.032 17:18:05 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:27.032 17:18:05 -- common/autotest_common.sh@62 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:27.032 17:18:05 -- common/autotest_common.sh@64 -- # : 00:07:27.032 17:18:05 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:27.032 17:18:05 -- common/autotest_common.sh@66 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:27.032 17:18:05 -- common/autotest_common.sh@68 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:27.032 17:18:05 -- common/autotest_common.sh@70 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:27.032 17:18:05 -- common/autotest_common.sh@72 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:27.032 17:18:05 -- common/autotest_common.sh@74 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:27.032 17:18:05 -- common/autotest_common.sh@76 -- # : 0 00:07:27.032 17:18:05 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:27.033 17:18:05 -- common/autotest_common.sh@78 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:27.033 17:18:05 -- common/autotest_common.sh@80 -- # : 1 00:07:27.033 17:18:05 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:27.033 17:18:05 -- common/autotest_common.sh@82 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:27.033 17:18:05 -- common/autotest_common.sh@84 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:27.033 17:18:05 -- common/autotest_common.sh@86 -- # : 1 00:07:27.033 17:18:05 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:27.033 
17:18:05 -- common/autotest_common.sh@88 -- # : 1 00:07:27.033 17:18:05 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:27.033 17:18:05 -- common/autotest_common.sh@90 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:27.033 17:18:05 -- common/autotest_common.sh@92 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:27.033 17:18:05 -- common/autotest_common.sh@94 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:27.033 17:18:05 -- common/autotest_common.sh@96 -- # : tcp 00:07:27.033 17:18:05 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:27.033 17:18:05 -- common/autotest_common.sh@98 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:27.033 17:18:05 -- common/autotest_common.sh@100 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:27.033 17:18:05 -- common/autotest_common.sh@102 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:27.033 17:18:05 -- common/autotest_common.sh@104 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:27.033 17:18:05 -- common/autotest_common.sh@106 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:27.033 17:18:05 -- common/autotest_common.sh@108 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:27.033 17:18:05 -- common/autotest_common.sh@110 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:27.033 17:18:05 -- common/autotest_common.sh@112 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:27.033 17:18:05 -- common/autotest_common.sh@114 -- # : 0 
00:07:27.033 17:18:05 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:27.033 17:18:05 -- common/autotest_common.sh@116 -- # : 1 00:07:27.033 17:18:05 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:27.033 17:18:05 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:27.033 17:18:05 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:27.033 17:18:05 -- common/autotest_common.sh@120 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:27.033 17:18:05 -- common/autotest_common.sh@122 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:27.033 17:18:05 -- common/autotest_common.sh@124 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:27.033 17:18:05 -- common/autotest_common.sh@126 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:27.033 17:18:05 -- common/autotest_common.sh@128 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:27.033 17:18:05 -- common/autotest_common.sh@130 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:27.033 17:18:05 -- common/autotest_common.sh@132 -- # : v23.11 00:07:27.033 17:18:05 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:27.033 17:18:05 -- common/autotest_common.sh@134 -- # : true 00:07:27.033 17:18:05 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:27.033 17:18:05 -- common/autotest_common.sh@136 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:27.033 17:18:05 -- common/autotest_common.sh@138 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:27.033 17:18:05 -- common/autotest_common.sh@140 -- # : 0 00:07:27.033 
17:18:05 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:27.033 17:18:05 -- common/autotest_common.sh@142 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:27.033 17:18:05 -- common/autotest_common.sh@144 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:27.033 17:18:05 -- common/autotest_common.sh@146 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:27.033 17:18:05 -- common/autotest_common.sh@148 -- # : e810 00:07:27.033 17:18:05 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:27.033 17:18:05 -- common/autotest_common.sh@150 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:27.033 17:18:05 -- common/autotest_common.sh@152 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:27.033 17:18:05 -- common/autotest_common.sh@154 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:27.033 17:18:05 -- common/autotest_common.sh@156 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:27.033 17:18:05 -- common/autotest_common.sh@158 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:27.033 17:18:05 -- common/autotest_common.sh@160 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:27.033 17:18:05 -- common/autotest_common.sh@163 -- # : 00:07:27.033 17:18:05 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:27.033 17:18:05 -- common/autotest_common.sh@165 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:27.033 17:18:05 -- common/autotest_common.sh@167 -- # : 0 00:07:27.033 17:18:05 -- common/autotest_common.sh@168 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:07:27.033 17:18:05 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:27.033 17:18:05 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:27.033 17:18:05 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.033 17:18:05 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.033 17:18:05 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:27.033 17:18:05 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:27.033 17:18:05 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:07:27.033 17:18:05 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:27.033 17:18:05 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:27.033 17:18:05 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:27.033 17:18:05 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:27.033 17:18:05 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:27.033 17:18:05 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:27.033 17:18:05 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:27.033 17:18:05 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:27.033 17:18:05 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:27.033 17:18:05 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:27.033 17:18:05 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:27.033 17:18:05 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:27.033 17:18:05 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:27.033 17:18:05 -- common/autotest_common.sh@196 -- # cat 00:07:27.033 17:18:05 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:27.033 17:18:05 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:27.033 17:18:05 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:27.033 17:18:05 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:27.033 17:18:05 -- 
common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:27.033 17:18:05 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:27.033 17:18:05 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:27.033 17:18:05 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:27.033 17:18:05 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:27.033 17:18:05 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:27.033 17:18:05 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:27.033 17:18:05 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:27.034 17:18:05 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:27.034 17:18:05 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:27.034 17:18:05 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:27.034 17:18:05 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:27.034 17:18:05 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:27.034 17:18:05 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:27.034 17:18:05 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:27.034 17:18:05 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:27.034 17:18:05 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:27.034 17:18:05 -- 
common/autotest_common.sh@249 -- # valgrind= 00:07:27.034 17:18:05 -- common/autotest_common.sh@255 -- # uname -s 00:07:27.034 17:18:05 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:27.034 17:18:05 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:27.034 17:18:05 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:27.034 17:18:05 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:27.034 17:18:05 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:27.034 17:18:05 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:27.034 17:18:05 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:27.034 17:18:05 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:07:27.034 17:18:05 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:27.034 17:18:05 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:27.034 17:18:05 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:27.034 17:18:05 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:27.034 17:18:05 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:27.034 17:18:05 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:27.034 17:18:05 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:27.034 17:18:05 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:27.034 17:18:05 -- common/autotest_common.sh@309 -- # [[ -z 3946767 ]] 00:07:27.034 17:18:05 -- common/autotest_common.sh@309 -- # kill -0 3946767 00:07:27.034 17:18:05 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:27.034 17:18:05 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:27.034 17:18:05 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:27.034 17:18:05 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:27.034 17:18:05 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:27.034 17:18:05 -- 
common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:27.034 17:18:05 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:27.034 17:18:05 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:27.034 17:18:05 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.qLTS1T 00:07:27.034 17:18:05 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:27.034 17:18:05 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:27.034 17:18:05 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:27.034 17:18:05 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.qLTS1T/tests/target /tmp/spdk.qLTS1T 00:07:27.034 17:18:05 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@318 -- # df -T 00:07:27.034 17:18:05 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:27.034 17:18:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=954339328 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- 
# sizes["$mount"]=5284429824 00:07:27.034 17:18:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330090496 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=82383552512 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94501482496 00:07:27.034 17:18:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=12117929984 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=47197220864 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250739200 00:07:27.034 17:18:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=18890862592 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18900299776 00:07:27.034 17:18:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=9437184 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:27.034 17:18:05 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=47250001920 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250743296 00:07:27.034 17:18:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=741376 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450143744 00:07:27.034 17:18:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450147840 00:07:27.034 17:18:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:27.034 17:18:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:27.034 17:18:05 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:27.034 * Looking for test storage... 00:07:27.034 17:18:05 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:27.034 17:18:05 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:27.293 17:18:05 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.293 17:18:05 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:27.293 17:18:06 -- common/autotest_common.sh@363 -- # mount=/ 00:07:27.293 17:18:06 -- common/autotest_common.sh@365 -- # target_space=82383552512 00:07:27.293 17:18:06 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:27.293 17:18:06 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:27.293 17:18:06 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:27.293 17:18:06 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:27.293 17:18:06 -- 
common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:27.293 17:18:06 -- common/autotest_common.sh@372 -- # new_size=14332522496 00:07:27.293 17:18:06 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:27.293 17:18:06 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.293 17:18:06 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.293 17:18:06 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.293 17:18:06 -- common/autotest_common.sh@380 -- # return 0 00:07:27.293 17:18:06 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:27.293 17:18:06 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:27.293 17:18:06 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:27.293 17:18:06 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:27.293 17:18:06 -- common/autotest_common.sh@1672 -- # true 00:07:27.293 17:18:06 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:27.293 17:18:06 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:27.293 17:18:06 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:27.293 17:18:06 -- common/autotest_common.sh@27 -- # exec 00:07:27.293 17:18:06 -- common/autotest_common.sh@29 -- # exec 00:07:27.293 17:18:06 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:27.293 17:18:06 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:27.293 17:18:06 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:27.293 17:18:06 -- common/autotest_common.sh@18 -- # set -x 00:07:27.293 17:18:06 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.293 17:18:06 -- nvmf/common.sh@7 -- # uname -s 00:07:27.293 17:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.293 17:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.293 17:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.293 17:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.293 17:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.293 17:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.293 17:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.293 17:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.293 17:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.293 17:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.293 17:18:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:27.293 17:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:27.293 17:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.293 17:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.293 17:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.293 17:18:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.293 17:18:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.293 17:18:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.293 17:18:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.293 17:18:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.293 17:18:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.293 17:18:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.293 17:18:06 -- paths/export.sh@5 -- # export PATH 00:07:27.294 17:18:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.294 17:18:06 -- nvmf/common.sh@46 -- # : 0 00:07:27.294 17:18:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:27.294 17:18:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:27.294 17:18:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:27.294 17:18:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.294 17:18:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.294 17:18:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:27.294 17:18:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:27.294 17:18:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:27.294 17:18:06 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:27.294 17:18:06 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:27.294 17:18:06 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:27.294 17:18:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:27.294 17:18:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.294 17:18:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:27.294 17:18:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:27.294 17:18:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:27.294 17:18:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.294 17:18:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.294 17:18:06 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.294 17:18:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:27.294 17:18:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:27.294 17:18:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:27.294 17:18:06 -- common/autotest_common.sh@10 -- # set +x 00:07:32.564 17:18:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:32.564 17:18:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:32.564 17:18:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:32.564 17:18:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:32.564 17:18:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:32.564 17:18:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:32.564 17:18:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:32.564 17:18:11 -- nvmf/common.sh@294 -- # net_devs=() 00:07:32.564 17:18:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:32.564 17:18:11 -- nvmf/common.sh@295 -- # e810=() 00:07:32.564 17:18:11 -- nvmf/common.sh@295 -- # local -ga e810 00:07:32.564 17:18:11 -- nvmf/common.sh@296 -- # x722=() 00:07:32.564 17:18:11 -- nvmf/common.sh@296 -- # local -ga x722 00:07:32.564 17:18:11 -- nvmf/common.sh@297 -- # mlx=() 00:07:32.564 17:18:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:32.564 17:18:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.564 17:18:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:32.564 17:18:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:32.564 17:18:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:32.564 17:18:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:32.564 17:18:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:32.564 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:32.564 17:18:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:32.564 17:18:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:32.564 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:32.564 17:18:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:32.564 17:18:11 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:32.564 17:18:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.564 17:18:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:32.564 17:18:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.564 17:18:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:32.564 Found net devices under 0000:af:00.0: cvl_0_0 00:07:32.564 17:18:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.564 17:18:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:32.564 17:18:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.564 17:18:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:32.564 17:18:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.564 17:18:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:32.564 Found net devices under 0000:af:00.1: cvl_0_1 00:07:32.564 17:18:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.564 17:18:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:32.564 17:18:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:32.564 17:18:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:32.564 17:18:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:32.564 17:18:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.564 17:18:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.564 17:18:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.564 17:18:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:32.564 17:18:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.564 17:18:11 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.564 17:18:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:32.564 17:18:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.564 17:18:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.564 17:18:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:32.564 17:18:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:32.564 17:18:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.824 17:18:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.824 17:18:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.824 17:18:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.824 17:18:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:32.824 17:18:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.824 17:18:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.824 17:18:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.824 17:18:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:32.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:32.824 00:07:32.824 --- 10.0.0.2 ping statistics --- 00:07:32.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.824 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:32.824 17:18:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:32.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:07:32.824 00:07:32.824 --- 10.0.0.1 ping statistics --- 00:07:32.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.824 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:07:32.824 17:18:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.824 17:18:11 -- nvmf/common.sh@410 -- # return 0 00:07:32.825 17:18:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:32.825 17:18:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.825 17:18:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:32.825 17:18:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:32.825 17:18:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.825 17:18:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:32.825 17:18:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:33.084 17:18:11 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:33.085 17:18:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:33.085 17:18:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.085 17:18:11 -- common/autotest_common.sh@10 -- # set +x 00:07:33.085 ************************************ 00:07:33.085 START TEST nvmf_filesystem_no_in_capsule 00:07:33.085 ************************************ 00:07:33.085 17:18:11 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:33.085 17:18:11 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:33.085 17:18:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:33.085 17:18:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:33.085 17:18:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:33.085 17:18:11 -- common/autotest_common.sh@10 -- # set +x 00:07:33.085 17:18:11 -- nvmf/common.sh@469 -- # nvmfpid=3950327 00:07:33.085 17:18:11 -- nvmf/common.sh@470 -- # waitforlisten 3950327 
00:07:33.085 17:18:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.085 17:18:11 -- common/autotest_common.sh@819 -- # '[' -z 3950327 ']' 00:07:33.085 17:18:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.085 17:18:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:33.085 17:18:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.085 17:18:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:33.085 17:18:11 -- common/autotest_common.sh@10 -- # set +x 00:07:33.085 [2024-07-12 17:18:11.875319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:33.085 [2024-07-12 17:18:11.875374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.085 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.085 [2024-07-12 17:18:11.963393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.085 [2024-07-12 17:18:12.007859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:33.085 [2024-07-12 17:18:12.008010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.085 [2024-07-12 17:18:12.008022] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.085 [2024-07-12 17:18:12.008031] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:33.085 [2024-07-12 17:18:12.008081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.085 [2024-07-12 17:18:12.008207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.085 [2024-07-12 17:18:12.008287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.085 [2024-07-12 17:18:12.008289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.020 17:18:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:34.020 17:18:12 -- common/autotest_common.sh@852 -- # return 0 00:07:34.020 17:18:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:34.020 17:18:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:34.020 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:07:34.020 17:18:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.020 17:18:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:34.020 17:18:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:34.020 17:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:34.020 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:07:34.020 [2024-07-12 17:18:12.859134] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.020 17:18:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:34.020 17:18:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:34.020 17:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:34.020 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:07:34.277 Malloc1 00:07:34.277 17:18:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:34.277 17:18:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:34.277 17:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:34.277 17:18:12 -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.277 17:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:34.277 17:18:13 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.277 17:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:34.277 17:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:34.277 17:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:34.277 17:18:13 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.277 17:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:34.277 17:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:34.277 [2024-07-12 17:18:13.015749] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.277 17:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:34.277 17:18:13 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:34.277 17:18:13 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:34.277 17:18:13 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:34.277 17:18:13 -- common/autotest_common.sh@1359 -- # local bs 00:07:34.277 17:18:13 -- common/autotest_common.sh@1360 -- # local nb 00:07:34.277 17:18:13 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:34.277 17:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:34.277 17:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:34.277 17:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:34.277 17:18:13 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:34.277 { 00:07:34.277 "name": "Malloc1", 00:07:34.277 "aliases": [ 00:07:34.277 "a663f479-e8f4-4734-805c-1297bba78d5d" 00:07:34.277 ], 00:07:34.277 "product_name": "Malloc disk", 00:07:34.277 "block_size": 512, 00:07:34.277 "num_blocks": 1048576, 00:07:34.277 "uuid": 
"a663f479-e8f4-4734-805c-1297bba78d5d", 00:07:34.277 "assigned_rate_limits": { 00:07:34.277 "rw_ios_per_sec": 0, 00:07:34.277 "rw_mbytes_per_sec": 0, 00:07:34.277 "r_mbytes_per_sec": 0, 00:07:34.277 "w_mbytes_per_sec": 0 00:07:34.277 }, 00:07:34.277 "claimed": true, 00:07:34.277 "claim_type": "exclusive_write", 00:07:34.277 "zoned": false, 00:07:34.277 "supported_io_types": { 00:07:34.277 "read": true, 00:07:34.277 "write": true, 00:07:34.277 "unmap": true, 00:07:34.277 "write_zeroes": true, 00:07:34.277 "flush": true, 00:07:34.277 "reset": true, 00:07:34.278 "compare": false, 00:07:34.278 "compare_and_write": false, 00:07:34.278 "abort": true, 00:07:34.278 "nvme_admin": false, 00:07:34.278 "nvme_io": false 00:07:34.278 }, 00:07:34.278 "memory_domains": [ 00:07:34.278 { 00:07:34.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.278 "dma_device_type": 2 00:07:34.278 } 00:07:34.278 ], 00:07:34.278 "driver_specific": {} 00:07:34.278 } 00:07:34.278 ]' 00:07:34.278 17:18:13 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:34.278 17:18:13 -- common/autotest_common.sh@1362 -- # bs=512 00:07:34.278 17:18:13 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:34.278 17:18:13 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:34.278 17:18:13 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:34.278 17:18:13 -- common/autotest_common.sh@1367 -- # echo 512 00:07:34.278 17:18:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:34.278 17:18:13 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.656 17:18:14 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.656 17:18:14 -- common/autotest_common.sh@1177 -- # local i=0 00:07:35.656 17:18:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:35.656 17:18:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:35.656 17:18:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:37.560 17:18:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:37.560 17:18:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:37.560 17:18:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.560 17:18:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:37.560 17:18:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.560 17:18:16 -- common/autotest_common.sh@1187 -- # return 0 00:07:37.560 17:18:16 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:37.560 17:18:16 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:37.560 17:18:16 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:37.560 17:18:16 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:37.560 17:18:16 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:37.560 17:18:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:37.560 17:18:16 -- setup/common.sh@80 -- # echo 536870912 00:07:37.560 17:18:16 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:37.560 17:18:16 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:37.560 17:18:16 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:37.560 17:18:16 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:37.819 17:18:16 -- target/filesystem.sh@69 -- # partprobe 00:07:38.387 17:18:17 -- target/filesystem.sh@70 -- # sleep 1 00:07:39.325 17:18:18 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:39.325 17:18:18 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:39.325 17:18:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:39.325 17:18:18 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:07:39.325 17:18:18 -- common/autotest_common.sh@10 -- # set +x 00:07:39.325 ************************************ 00:07:39.325 START TEST filesystem_ext4 00:07:39.325 ************************************ 00:07:39.325 17:18:18 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:39.325 17:18:18 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:39.325 17:18:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.325 17:18:18 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:39.325 17:18:18 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:39.325 17:18:18 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:39.325 17:18:18 -- common/autotest_common.sh@904 -- # local i=0 00:07:39.325 17:18:18 -- common/autotest_common.sh@905 -- # local force 00:07:39.325 17:18:18 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:39.325 17:18:18 -- common/autotest_common.sh@908 -- # force=-F 00:07:39.325 17:18:18 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:39.325 mke2fs 1.46.5 (30-Dec-2021) 00:07:39.325 Discarding device blocks: 0/522240 done 00:07:39.325 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:39.325 Filesystem UUID: 3147c14b-91b9-485b-9dd3-24cf751c883f 00:07:39.325 Superblock backups stored on blocks: 00:07:39.325 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:39.325 00:07:39.325 Allocating group tables: 0/64 done 00:07:39.325 Writing inode tables: 0/64 done 00:07:39.583 Creating journal (8192 blocks): done 00:07:40.409 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:07:40.409 00:07:40.409 17:18:19 -- common/autotest_common.sh@921 -- # return 0 00:07:40.409 17:18:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.668 17:18:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.668 17:18:19 -- target/filesystem.sh@25 -- # sync 
00:07:40.668 17:18:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.668 17:18:19 -- target/filesystem.sh@27 -- # sync 00:07:40.668 17:18:19 -- target/filesystem.sh@29 -- # i=0 00:07:40.668 17:18:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.668 17:18:19 -- target/filesystem.sh@37 -- # kill -0 3950327 00:07:40.668 17:18:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.668 17:18:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.668 17:18:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.668 17:18:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.668 00:07:40.668 real 0m1.434s 00:07:40.668 user 0m0.031s 00:07:40.668 sys 0m0.059s 00:07:40.668 17:18:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.668 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:40.668 ************************************ 00:07:40.668 END TEST filesystem_ext4 00:07:40.668 ************************************ 00:07:40.668 17:18:19 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:40.668 17:18:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:40.668 17:18:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.668 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:40.668 ************************************ 00:07:40.668 START TEST filesystem_btrfs 00:07:40.668 ************************************ 00:07:40.668 17:18:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:40.668 17:18:19 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:40.668 17:18:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.668 17:18:19 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:40.668 17:18:19 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:40.668 17:18:19 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:40.668 17:18:19 -- 
common/autotest_common.sh@904 -- # local i=0 00:07:40.668 17:18:19 -- common/autotest_common.sh@905 -- # local force 00:07:40.668 17:18:19 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:40.668 17:18:19 -- common/autotest_common.sh@910 -- # force=-f 00:07:40.668 17:18:19 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:40.928 btrfs-progs v6.6.2 00:07:40.928 See https://btrfs.readthedocs.io for more information. 00:07:40.928 00:07:40.928 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:40.928 NOTE: several default settings have changed in version 5.15, please make sure 00:07:40.928 this does not affect your deployments: 00:07:40.928 - DUP for metadata (-m dup) 00:07:40.928 - enabled no-holes (-O no-holes) 00:07:40.928 - enabled free-space-tree (-R free-space-tree) 00:07:40.928 00:07:40.928 Label: (null) 00:07:40.928 UUID: 66c08d42-8155-4efe-a0dc-c40ad6f255bb 00:07:40.928 Node size: 16384 00:07:40.928 Sector size: 4096 00:07:40.928 Filesystem size: 510.00MiB 00:07:40.928 Block group profiles: 00:07:40.928 Data: single 8.00MiB 00:07:40.928 Metadata: DUP 32.00MiB 00:07:40.928 System: DUP 8.00MiB 00:07:40.928 SSD detected: yes 00:07:40.928 Zoned device: no 00:07:40.928 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:40.928 Runtime features: free-space-tree 00:07:40.928 Checksum: crc32c 00:07:40.928 Number of devices: 1 00:07:40.928 Devices: 00:07:40.928 ID SIZE PATH 00:07:40.928 1 510.00MiB /dev/nvme0n1p1 00:07:40.928 00:07:40.928 17:18:19 -- common/autotest_common.sh@921 -- # return 0 00:07:40.928 17:18:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.497 17:18:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.497 17:18:20 -- target/filesystem.sh@25 -- # sync 00:07:41.497 17:18:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.497 17:18:20 -- target/filesystem.sh@27 -- # sync 00:07:41.497 17:18:20 -- target/filesystem.sh@29 -- # 
i=0 00:07:41.497 17:18:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.497 17:18:20 -- target/filesystem.sh@37 -- # kill -0 3950327 00:07:41.497 17:18:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.497 17:18:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.497 17:18:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.497 17:18:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.497 00:07:41.497 real 0m0.818s 00:07:41.497 user 0m0.029s 00:07:41.497 sys 0m0.124s 00:07:41.497 17:18:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.497 17:18:20 -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 ************************************ 00:07:41.497 END TEST filesystem_btrfs 00:07:41.497 ************************************ 00:07:41.497 17:18:20 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:41.497 17:18:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:41.497 17:18:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.497 17:18:20 -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 ************************************ 00:07:41.497 START TEST filesystem_xfs 00:07:41.497 ************************************ 00:07:41.755 17:18:20 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:41.755 17:18:20 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:41.755 17:18:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.755 17:18:20 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:41.755 17:18:20 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:41.755 17:18:20 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:41.755 17:18:20 -- common/autotest_common.sh@904 -- # local i=0 00:07:41.755 17:18:20 -- common/autotest_common.sh@905 -- # local force 00:07:41.755 17:18:20 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 
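[annotation] The `make_filesystem` calls traced here pick the mkfs force flag per filesystem type: the `'[' btrfs = ext4 ']'` / `'[' xfs = ext4 ']'` checks select `-F` for ext4 and fall back to `-f` for btrfs and xfs. A minimal sketch of that selection logic, with a hypothetical helper name (not the harness's actual function):

```shell
# Sketch of the force-flag selection visible in the xtrace above.
# ext4's mkfs wants -F to overwrite an existing filesystem; the
# btrfs/xfs tools use -f for the same purpose.
pick_force_flag() {
    if [ "$1" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}
```

The real `make_filesystem` then runs `mkfs.$fstype $force $dev_name`, which is why the trace shows `mkfs.btrfs -f` but `mkfs.ext4 -F`.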
00:07:41.756 17:18:20 -- common/autotest_common.sh@910 -- # force=-f 00:07:41.756 17:18:20 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:41.756 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:41.756 = sectsz=512 attr=2, projid32bit=1 00:07:41.756 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:41.756 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:41.756 data = bsize=4096 blocks=130560, imaxpct=25 00:07:41.756 = sunit=0 swidth=0 blks 00:07:41.756 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:41.756 log =internal log bsize=4096 blocks=16384, version=2 00:07:41.756 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:41.756 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:42.692 Discarding blocks...Done. 00:07:42.692 17:18:21 -- common/autotest_common.sh@921 -- # return 0 00:07:42.692 17:18:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.597 17:18:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.597 17:18:23 -- target/filesystem.sh@25 -- # sync 00:07:44.597 17:18:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.597 17:18:23 -- target/filesystem.sh@27 -- # sync 00:07:44.597 17:18:23 -- target/filesystem.sh@29 -- # i=0 00:07:44.597 17:18:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.597 17:18:23 -- target/filesystem.sh@37 -- # kill -0 3950327 00:07:44.597 17:18:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.597 17:18:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.597 17:18:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.597 17:18:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.597 00:07:44.597 real 0m2.862s 00:07:44.597 user 0m0.030s 00:07:44.597 sys 0m0.064s 00:07:44.597 17:18:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.597 17:18:23 -- common/autotest_common.sh@10 -- # set +x 00:07:44.597 ************************************ 00:07:44.597 END TEST filesystem_xfs 
00:07:44.597 ************************************ 00:07:44.597 17:18:23 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.856 17:18:23 -- target/filesystem.sh@93 -- # sync 00:07:44.856 17:18:23 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.856 17:18:23 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.856 17:18:23 -- common/autotest_common.sh@1198 -- # local i=0 00:07:44.856 17:18:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:44.856 17:18:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.856 17:18:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:44.856 17:18:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.856 17:18:23 -- common/autotest_common.sh@1210 -- # return 0 00:07:44.856 17:18:23 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.856 17:18:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.856 17:18:23 -- common/autotest_common.sh@10 -- # set +x 00:07:44.856 17:18:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.856 17:18:23 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:44.856 17:18:23 -- target/filesystem.sh@101 -- # killprocess 3950327 00:07:44.856 17:18:23 -- common/autotest_common.sh@926 -- # '[' -z 3950327 ']' 00:07:44.856 17:18:23 -- common/autotest_common.sh@930 -- # kill -0 3950327 00:07:44.856 17:18:23 -- common/autotest_common.sh@931 -- # uname 00:07:44.856 17:18:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:44.856 17:18:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3950327 00:07:44.856 17:18:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:44.856 17:18:23 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:07:44.856 17:18:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3950327' 00:07:44.856 killing process with pid 3950327 00:07:44.856 17:18:23 -- common/autotest_common.sh@945 -- # kill 3950327 00:07:44.856 17:18:23 -- common/autotest_common.sh@950 -- # wait 3950327 00:07:45.425 17:18:24 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:45.425 00:07:45.425 real 0m12.322s 00:07:45.425 user 0m48.420s 00:07:45.425 sys 0m1.248s 00:07:45.425 17:18:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.425 17:18:24 -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 END TEST nvmf_filesystem_no_in_capsule 00:07:45.425 ************************************ 00:07:45.425 17:18:24 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:45.425 17:18:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:45.425 17:18:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.425 17:18:24 -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 START TEST nvmf_filesystem_in_capsule 00:07:45.425 ************************************ 00:07:45.425 17:18:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:45.425 17:18:24 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:45.425 17:18:24 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:45.425 17:18:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:45.425 17:18:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:45.425 17:18:24 -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 17:18:24 -- nvmf/common.sh@469 -- # nvmfpid=3952930 00:07:45.425 17:18:24 -- nvmf/common.sh@470 -- # waitforlisten 3952930 00:07:45.425 17:18:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:07:45.425 17:18:24 -- common/autotest_common.sh@819 -- # '[' -z 3952930 ']' 00:07:45.425 17:18:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.425 17:18:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:45.425 17:18:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.425 17:18:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:45.425 17:18:24 -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 [2024-07-12 17:18:24.236931] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:45.425 [2024-07-12 17:18:24.236990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.425 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.425 [2024-07-12 17:18:24.323377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.425 [2024-07-12 17:18:24.366071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.426 [2024-07-12 17:18:24.366216] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.426 [2024-07-12 17:18:24.366227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.426 [2024-07-12 17:18:24.366242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
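[annotation] `waitforlisten 3952930` above blocks until the freshly started nvmf_tgt is up and answering on `/var/tmp/spdk.sock`, bounded by `max_retries=100`. A simplified sketch of that polling pattern (hypothetical name; the real helper probes the RPC socket with rpc.py rather than a bare path-existence test):

```shell
# Poll until either the target process dies or its RPC endpoint appears,
# giving up after a bounded number of retries (like max_retries=100 above).
wait_for_rpc() {
    local pid=$1 sock=$2 max_retries=${3:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -e "$sock" ] && return 0               # endpoint came up
        i=$((i + 1))
        sleep 0.1
    done
    return 1                                     # timed out
}
```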
00:07:45.426 [2024-07-12 17:18:24.366284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.426 [2024-07-12 17:18:24.366387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.426 [2024-07-12 17:18:24.366477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.426 [2024-07-12 17:18:24.366480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.457 17:18:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:46.457 17:18:25 -- common/autotest_common.sh@852 -- # return 0 00:07:46.457 17:18:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:46.457 17:18:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:46.457 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.457 17:18:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.457 17:18:25 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:46.457 17:18:25 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:46.457 17:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.457 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.457 [2024-07-12 17:18:25.216269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.457 17:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.457 17:18:25 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:46.457 17:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.457 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.457 Malloc1 00:07:46.457 17:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.457 17:18:25 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:46.457 17:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.457 17:18:25 -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.457 17:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.457 17:18:25 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:46.457 17:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.457 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.457 17:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.457 17:18:25 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.457 17:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.457 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.457 [2024-07-12 17:18:25.372165] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.457 17:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.457 17:18:25 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:46.457 17:18:25 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:46.457 17:18:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:46.457 17:18:25 -- common/autotest_common.sh@1359 -- # local bs 00:07:46.457 17:18:25 -- common/autotest_common.sh@1360 -- # local nb 00:07:46.457 17:18:25 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:46.457 17:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.457 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.457 17:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.457 17:18:25 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:46.457 { 00:07:46.457 "name": "Malloc1", 00:07:46.457 "aliases": [ 00:07:46.457 "cfbe104f-3a00-4ddd-8745-f453eb5d8df2" 00:07:46.457 ], 00:07:46.457 "product_name": "Malloc disk", 00:07:46.457 "block_size": 512, 00:07:46.457 "num_blocks": 1048576, 00:07:46.457 "uuid": 
"cfbe104f-3a00-4ddd-8745-f453eb5d8df2", 00:07:46.457 "assigned_rate_limits": { 00:07:46.457 "rw_ios_per_sec": 0, 00:07:46.457 "rw_mbytes_per_sec": 0, 00:07:46.457 "r_mbytes_per_sec": 0, 00:07:46.457 "w_mbytes_per_sec": 0 00:07:46.457 }, 00:07:46.457 "claimed": true, 00:07:46.457 "claim_type": "exclusive_write", 00:07:46.457 "zoned": false, 00:07:46.457 "supported_io_types": { 00:07:46.457 "read": true, 00:07:46.457 "write": true, 00:07:46.457 "unmap": true, 00:07:46.457 "write_zeroes": true, 00:07:46.457 "flush": true, 00:07:46.457 "reset": true, 00:07:46.457 "compare": false, 00:07:46.457 "compare_and_write": false, 00:07:46.457 "abort": true, 00:07:46.457 "nvme_admin": false, 00:07:46.457 "nvme_io": false 00:07:46.457 }, 00:07:46.457 "memory_domains": [ 00:07:46.457 { 00:07:46.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.457 "dma_device_type": 2 00:07:46.457 } 00:07:46.457 ], 00:07:46.457 "driver_specific": {} 00:07:46.457 } 00:07:46.457 ]' 00:07:46.457 17:18:25 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:46.741 17:18:25 -- common/autotest_common.sh@1362 -- # bs=512 00:07:46.741 17:18:25 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:46.741 17:18:25 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:46.741 17:18:25 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:46.741 17:18:25 -- common/autotest_common.sh@1367 -- # echo 512 00:07:46.741 17:18:25 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:46.741 17:18:25 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.115 17:18:26 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:48.115 17:18:26 -- common/autotest_common.sh@1177 -- # local i=0 00:07:48.115 17:18:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:48.115 17:18:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:48.115 17:18:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:50.020 17:18:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:50.020 17:18:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:50.020 17:18:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:50.020 17:18:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:50.020 17:18:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:50.020 17:18:28 -- common/autotest_common.sh@1187 -- # return 0 00:07:50.020 17:18:28 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:50.020 17:18:28 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:50.020 17:18:28 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:50.020 17:18:28 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:50.020 17:18:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:50.020 17:18:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:50.020 17:18:28 -- setup/common.sh@80 -- # echo 536870912 00:07:50.020 17:18:28 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:50.020 17:18:28 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:50.020 17:18:28 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:50.020 17:18:28 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:50.279 17:18:29 -- target/filesystem.sh@69 -- # partprobe 00:07:50.538 17:18:29 -- target/filesystem.sh@70 -- # sleep 1 00:07:51.474 17:18:30 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:51.474 17:18:30 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:51.474 17:18:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:51.474 17:18:30 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.474 17:18:30 -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 ************************************ 00:07:51.474 START TEST filesystem_in_capsule_ext4 00:07:51.474 ************************************ 00:07:51.474 17:18:30 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:51.475 17:18:30 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:51.475 17:18:30 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.475 17:18:30 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:51.475 17:18:30 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:51.475 17:18:30 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:51.475 17:18:30 -- common/autotest_common.sh@904 -- # local i=0 00:07:51.475 17:18:30 -- common/autotest_common.sh@905 -- # local force 00:07:51.475 17:18:30 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:51.475 17:18:30 -- common/autotest_common.sh@908 -- # force=-F 00:07:51.475 17:18:30 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:51.475 mke2fs 1.46.5 (30-Dec-2021) 00:07:51.734 Discarding device blocks: 0/522240 done 00:07:51.734 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:51.734 Filesystem UUID: 8b3ece6b-1ef6-436f-b756-e793e07d10cc 00:07:51.734 Superblock backups stored on blocks: 00:07:51.734 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:51.734 00:07:51.734 Allocating group tables: 0/64 done 00:07:51.734 Writing inode tables: 0/64 done 00:07:52.301 Creating journal (8192 blocks): done 00:07:53.237 Writing superblocks and filesystem accounting information: 0/64 done 00:07:53.237 00:07:53.237 17:18:31 -- common/autotest_common.sh@921 -- # return 0 00:07:53.237 17:18:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.237 17:18:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.237 17:18:32 
-- target/filesystem.sh@25 -- # sync 00:07:53.237 17:18:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.496 17:18:32 -- target/filesystem.sh@27 -- # sync 00:07:53.496 17:18:32 -- target/filesystem.sh@29 -- # i=0 00:07:53.496 17:18:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.496 17:18:32 -- target/filesystem.sh@37 -- # kill -0 3952930 00:07:53.496 17:18:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.496 17:18:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.496 17:18:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.496 17:18:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.496 00:07:53.496 real 0m1.818s 00:07:53.496 user 0m0.028s 00:07:53.496 sys 0m0.062s 00:07:53.496 17:18:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.496 17:18:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.496 ************************************ 00:07:53.496 END TEST filesystem_in_capsule_ext4 00:07:53.496 ************************************ 00:07:53.496 17:18:32 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:53.496 17:18:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:53.496 17:18:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.496 17:18:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.496 ************************************ 00:07:53.496 START TEST filesystem_in_capsule_btrfs 00:07:53.496 ************************************ 00:07:53.496 17:18:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:53.496 17:18:32 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:53.496 17:18:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.496 17:18:32 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:53.496 17:18:32 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:53.496 17:18:32 -- 
common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:53.496 17:18:32 -- common/autotest_common.sh@904 -- # local i=0 00:07:53.496 17:18:32 -- common/autotest_common.sh@905 -- # local force 00:07:53.496 17:18:32 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:53.496 17:18:32 -- common/autotest_common.sh@910 -- # force=-f 00:07:53.496 17:18:32 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:53.755 btrfs-progs v6.6.2 00:07:53.755 See https://btrfs.readthedocs.io for more information. 00:07:53.755 00:07:53.755 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:53.755 NOTE: several default settings have changed in version 5.15, please make sure 00:07:53.755 this does not affect your deployments: 00:07:53.755 - DUP for metadata (-m dup) 00:07:53.755 - enabled no-holes (-O no-holes) 00:07:53.755 - enabled free-space-tree (-R free-space-tree) 00:07:53.755 00:07:53.755 Label: (null) 00:07:53.755 UUID: 52ac3bd4-35d6-4353-9f50-661983a31d71 00:07:53.755 Node size: 16384 00:07:53.755 Sector size: 4096 00:07:53.755 Filesystem size: 510.00MiB 00:07:53.755 Block group profiles: 00:07:53.755 Data: single 8.00MiB 00:07:53.755 Metadata: DUP 32.00MiB 00:07:53.755 System: DUP 8.00MiB 00:07:53.755 SSD detected: yes 00:07:53.755 Zoned device: no 00:07:53.755 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:53.755 Runtime features: free-space-tree 00:07:53.755 Checksum: crc32c 00:07:53.755 Number of devices: 1 00:07:53.755 Devices: 00:07:53.755 ID SIZE PATH 00:07:53.755 1 510.00MiB /dev/nvme0n1p1 00:07:53.755 00:07:53.755 17:18:32 -- common/autotest_common.sh@921 -- # return 0 00:07:53.755 17:18:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.692 17:18:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.692 17:18:33 -- target/filesystem.sh@25 -- # sync 00:07:54.692 17:18:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.692 17:18:33 
-- target/filesystem.sh@27 -- # sync 00:07:54.692 17:18:33 -- target/filesystem.sh@29 -- # i=0 00:07:54.692 17:18:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.692 17:18:33 -- target/filesystem.sh@37 -- # kill -0 3952930 00:07:54.692 17:18:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.692 17:18:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.692 17:18:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.692 17:18:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.692 00:07:54.692 real 0m1.176s 00:07:54.692 user 0m0.025s 00:07:54.692 sys 0m0.130s 00:07:54.692 17:18:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.692 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.692 ************************************ 00:07:54.692 END TEST filesystem_in_capsule_btrfs 00:07:54.692 ************************************ 00:07:54.692 17:18:33 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:54.692 17:18:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.692 17:18:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.692 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.692 ************************************ 00:07:54.692 START TEST filesystem_in_capsule_xfs 00:07:54.692 ************************************ 00:07:54.692 17:18:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:54.692 17:18:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:54.692 17:18:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.692 17:18:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:54.692 17:18:33 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:54.692 17:18:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:54.692 17:18:33 -- common/autotest_common.sh@904 -- # local i=0 00:07:54.692 17:18:33 -- 
common/autotest_common.sh@905 -- # local force 00:07:54.692 17:18:33 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:54.692 17:18:33 -- common/autotest_common.sh@910 -- # force=-f 00:07:54.692 17:18:33 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:54.692 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:54.692 = sectsz=512 attr=2, projid32bit=1 00:07:54.692 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:54.692 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:54.692 data = bsize=4096 blocks=130560, imaxpct=25 00:07:54.692 = sunit=0 swidth=0 blks 00:07:54.692 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:54.692 log =internal log bsize=4096 blocks=16384, version=2 00:07:54.692 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:54.692 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:55.629 Discarding blocks...Done. 00:07:55.629 17:18:34 -- common/autotest_common.sh@921 -- # return 0 00:07:55.629 17:18:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.536 17:18:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.536 17:18:36 -- target/filesystem.sh@25 -- # sync 00:07:57.536 17:18:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.536 17:18:36 -- target/filesystem.sh@27 -- # sync 00:07:57.536 17:18:36 -- target/filesystem.sh@29 -- # i=0 00:07:57.536 17:18:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.536 17:18:36 -- target/filesystem.sh@37 -- # kill -0 3952930 00:07:57.536 17:18:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.536 17:18:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.536 17:18:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.536 17:18:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.536 00:07:57.536 real 0m2.821s 00:07:57.536 user 0m0.023s 00:07:57.536 sys 0m0.072s 00:07:57.536 17:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.536 17:18:36 -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.536 ************************************ 00:07:57.536 END TEST filesystem_in_capsule_xfs 00:07:57.536 ************************************ 00:07:57.536 17:18:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:57.536 17:18:36 -- target/filesystem.sh@93 -- # sync 00:07:57.536 17:18:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.536 17:18:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.536 17:18:36 -- common/autotest_common.sh@1198 -- # local i=0 00:07:57.536 17:18:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:57.536 17:18:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.794 17:18:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:57.794 17:18:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.795 17:18:36 -- common/autotest_common.sh@1210 -- # return 0 00:07:57.795 17:18:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.795 17:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.795 17:18:36 -- common/autotest_common.sh@10 -- # set +x 00:07:57.795 17:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.795 17:18:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:57.795 17:18:36 -- target/filesystem.sh@101 -- # killprocess 3952930 00:07:57.795 17:18:36 -- common/autotest_common.sh@926 -- # '[' -z 3952930 ']' 00:07:57.795 17:18:36 -- common/autotest_common.sh@930 -- # kill -0 3952930 00:07:57.795 17:18:36 -- common/autotest_common.sh@931 -- # uname 00:07:57.795 17:18:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.795 17:18:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3952930 
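[annotation] The `killprocess 3952930` sequence traced here checks the pid is alive with `kill -0`, resolves its name via `ps --no-headers -o comm=`, refuses to signal a `sudo` process (the `'[' reactor_0 = sudo ']'` guard), then kills and reaps it. A sketch of that behaviour (hypothetical name; the real helper also branches on `uname`, omitted here):

```shell
# Mirror of the killprocess steps visible in the trace: verify the pid,
# never signal sudo directly, then TERM the process and reap it.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null    # reap; ignore the signal exit status
    return 0
}
```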
00:07:57.795 17:18:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.795 17:18:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.795 17:18:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3952930' 00:07:57.795 killing process with pid 3952930 00:07:57.795 17:18:36 -- common/autotest_common.sh@945 -- # kill 3952930 00:07:57.795 17:18:36 -- common/autotest_common.sh@950 -- # wait 3952930 00:07:58.054 17:18:36 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:58.054 00:07:58.054 real 0m12.751s 00:07:58.054 user 0m50.148s 00:07:58.054 sys 0m1.243s 00:07:58.054 17:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.054 17:18:36 -- common/autotest_common.sh@10 -- # set +x 00:07:58.054 ************************************ 00:07:58.054 END TEST nvmf_filesystem_in_capsule 00:07:58.054 ************************************ 00:07:58.054 17:18:36 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:58.054 17:18:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:58.054 17:18:36 -- nvmf/common.sh@116 -- # sync 00:07:58.054 17:18:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:58.054 17:18:36 -- nvmf/common.sh@119 -- # set +e 00:07:58.054 17:18:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:58.054 17:18:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:58.054 rmmod nvme_tcp 00:07:58.054 rmmod nvme_fabrics 00:07:58.054 rmmod nvme_keyring 00:07:58.313 17:18:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:58.313 17:18:37 -- nvmf/common.sh@123 -- # set -e 00:07:58.313 17:18:37 -- nvmf/common.sh@124 -- # return 0 00:07:58.313 17:18:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:58.313 17:18:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:58.313 17:18:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:58.313 17:18:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:58.313 17:18:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
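[annotation] The `nvmftestfini` teardown above tolerates transient `modprobe -v -r` failures by looping (`for i in {1..20}` under `set +e`), which is why stray `rmmod nvme_tcp` / `rmmod nvme_fabrics` lines appear without aborting the run. The same retry shape as a generic helper (hypothetical name, not from the harness):

```shell
# Run a command up to N times, returning success on the first attempt
# that succeeds -- the pattern used for module unload retries above.
retry() {
    local attempts=$1; shift
    local i
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
    done
    return 1
}
```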
00:07:58.313 17:18:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:58.313 17:18:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.313 17:18:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.313 17:18:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.216 17:18:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:00.216 00:08:00.216 real 0m33.322s 00:08:00.216 user 1m40.312s 00:08:00.216 sys 0m6.955s 00:08:00.216 17:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.216 17:18:39 -- common/autotest_common.sh@10 -- # set +x 00:08:00.216 ************************************ 00:08:00.216 END TEST nvmf_filesystem 00:08:00.216 ************************************ 00:08:00.216 17:18:39 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.216 17:18:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.216 17:18:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.216 17:18:39 -- common/autotest_common.sh@10 -- # set +x 00:08:00.216 ************************************ 00:08:00.216 START TEST nvmf_discovery 00:08:00.216 ************************************ 00:08:00.216 17:18:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.475 * Looking for test storage... 
00:08:00.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.475 17:18:39 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.475 17:18:39 -- nvmf/common.sh@7 -- # uname -s 00:08:00.475 17:18:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.475 17:18:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.475 17:18:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.475 17:18:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.475 17:18:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.475 17:18:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.475 17:18:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.475 17:18:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.475 17:18:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.475 17:18:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.475 17:18:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:00.475 17:18:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:00.475 17:18:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.475 17:18:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.475 17:18:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.475 17:18:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.475 17:18:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.475 17:18:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.475 17:18:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.475 17:18:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.475 17:18:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.475 17:18:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.475 17:18:39 -- paths/export.sh@5 -- # export PATH 00:08:00.475 17:18:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.475 17:18:39 -- nvmf/common.sh@46 -- # : 0 00:08:00.475 17:18:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.475 17:18:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.476 17:18:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.476 17:18:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.476 17:18:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.476 17:18:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.476 17:18:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.476 17:18:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.476 17:18:39 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:00.476 17:18:39 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:00.476 17:18:39 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:00.476 17:18:39 -- target/discovery.sh@15 -- # hash nvme 00:08:00.476 17:18:39 -- target/discovery.sh@20 -- # nvmftestinit 00:08:00.476 17:18:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:00.476 17:18:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.476 17:18:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:00.476 17:18:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:00.476 17:18:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:00.476 17:18:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.476 17:18:39 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:08:00.476 17:18:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.476 17:18:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:00.476 17:18:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:00.476 17:18:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:00.476 17:18:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.030 17:18:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:07.030 17:18:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:07.030 17:18:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:07.030 17:18:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:07.030 17:18:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:07.030 17:18:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:07.030 17:18:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:07.030 17:18:44 -- nvmf/common.sh@294 -- # net_devs=() 00:08:07.030 17:18:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:07.030 17:18:44 -- nvmf/common.sh@295 -- # e810=() 00:08:07.030 17:18:44 -- nvmf/common.sh@295 -- # local -ga e810 00:08:07.030 17:18:44 -- nvmf/common.sh@296 -- # x722=() 00:08:07.030 17:18:44 -- nvmf/common.sh@296 -- # local -ga x722 00:08:07.030 17:18:44 -- nvmf/common.sh@297 -- # mlx=() 00:08:07.030 17:18:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:07.030 17:18:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.030 17:18:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:07.030 17:18:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:07.030 17:18:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:07.030 17:18:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.030 17:18:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:07.030 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:07.030 17:18:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:07.030 17:18:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:07.030 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:07.030 17:18:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.030 17:18:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:08:07.030 17:18:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:07.031 17:18:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:07.031 17:18:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:07.031 17:18:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.031 17:18:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.031 17:18:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.031 17:18:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.031 17:18:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:07.031 Found net devices under 0000:af:00.0: cvl_0_0 00:08:07.031 17:18:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.031 17:18:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:07.031 17:18:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.031 17:18:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:07.031 17:18:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.031 17:18:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:07.031 Found net devices under 0000:af:00.1: cvl_0_1 00:08:07.031 17:18:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.031 17:18:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:07.031 17:18:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:07.031 17:18:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:07.031 17:18:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:07.031 17:18:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:07.031 17:18:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.031 17:18:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.031 17:18:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.031 17:18:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:07.031 17:18:44 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.031 17:18:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.031 17:18:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:07.031 17:18:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.031 17:18:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.031 17:18:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:07.031 17:18:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:07.031 17:18:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.031 17:18:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.031 17:18:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.031 17:18:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.031 17:18:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:07.031 17:18:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.031 17:18:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.031 17:18:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.031 17:18:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:07.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:08:07.031 00:08:07.031 --- 10.0.0.2 ping statistics --- 00:08:07.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.031 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:07.031 17:18:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:08:07.031 00:08:07.031 --- 10.0.0.1 ping statistics --- 00:08:07.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.031 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:07.031 17:18:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.031 17:18:44 -- nvmf/common.sh@410 -- # return 0 00:08:07.031 17:18:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.031 17:18:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.031 17:18:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.031 17:18:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.031 17:18:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.031 17:18:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.031 17:18:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.031 17:18:45 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.031 17:18:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.031 17:18:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:07.031 17:18:45 -- common/autotest_common.sh@10 -- # set +x 00:08:07.031 17:18:45 -- nvmf/common.sh@469 -- # nvmfpid=3959039 00:08:07.031 17:18:45 -- nvmf/common.sh@470 -- # waitforlisten 3959039 00:08:07.031 17:18:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.031 17:18:45 -- common/autotest_common.sh@819 -- # '[' -z 3959039 ']' 00:08:07.031 17:18:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.031 17:18:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:07.031 17:18:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:07.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.031 17:18:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:07.031 17:18:45 -- common/autotest_common.sh@10 -- # set +x 00:08:07.031 [2024-07-12 17:18:45.075639] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:07.031 [2024-07-12 17:18:45.075697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.031 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.031 [2024-07-12 17:18:45.161112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.031 [2024-07-12 17:18:45.203487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.031 [2024-07-12 17:18:45.203639] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.031 [2024-07-12 17:18:45.203651] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.031 [2024-07-12 17:18:45.203660] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:07.031 [2024-07-12 17:18:45.203761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.031 [2024-07-12 17:18:45.203863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.031 [2024-07-12 17:18:45.203932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.031 [2024-07-12 17:18:45.203935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.031 17:18:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:07.031 17:18:45 -- common/autotest_common.sh@852 -- # return 0 00:08:07.031 17:18:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:07.031 17:18:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:07.031 17:18:45 -- common/autotest_common.sh@10 -- # set +x 00:08:07.031 17:18:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.031 17:18:45 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.031 17:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.031 17:18:45 -- common/autotest_common.sh@10 -- # set +x 00:08:07.031 [2024-07-12 17:18:45.985054] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.031 17:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.031 17:18:45 -- target/discovery.sh@26 -- # seq 1 4 00:08:07.288 17:18:45 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.288 17:18:45 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:07.289 17:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:45 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 Null1 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 [2024-07-12 17:18:46.033374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.289 17:18:46 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 Null2 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 
17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.289 17:18:46 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 Null3 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.289 17:18:46 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 Null4 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.289 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.289 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.289 17:18:46 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:07.556 00:08:07.556 Discovery Log Number of Records 6, Generation counter 6 00:08:07.556 =====Discovery Log Entry 0====== 00:08:07.556 trtype: tcp 00:08:07.556 adrfam: ipv4 00:08:07.556 subtype: current discovery subsystem 00:08:07.556 treq: not required 00:08:07.556 portid: 0 00:08:07.556 trsvcid: 4420 00:08:07.556 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.556 traddr: 10.0.0.2 00:08:07.556 eflags: explicit discovery connections, duplicate discovery information 00:08:07.556 sectype: none 00:08:07.556 =====Discovery Log Entry 1====== 00:08:07.556 trtype: tcp 00:08:07.556 adrfam: ipv4 00:08:07.556 subtype: nvme subsystem 00:08:07.556 treq: not required 00:08:07.556 portid: 0 00:08:07.556 trsvcid: 4420 00:08:07.556 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:07.556 traddr: 10.0.0.2 00:08:07.556 eflags: none 00:08:07.556 sectype: none 00:08:07.556 =====Discovery Log Entry 2====== 00:08:07.556 trtype: tcp 00:08:07.556 adrfam: ipv4 00:08:07.556 subtype: nvme subsystem 00:08:07.556 treq: not required 00:08:07.556 portid: 0 00:08:07.556 trsvcid: 4420 00:08:07.556 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:07.556 traddr: 10.0.0.2 00:08:07.556 eflags: none 00:08:07.556 sectype: none 00:08:07.556 =====Discovery Log Entry 3====== 00:08:07.556 trtype: tcp 00:08:07.556 adrfam: ipv4 00:08:07.556 subtype: nvme subsystem 00:08:07.556 treq: not required 00:08:07.556 portid: 0 00:08:07.556 trsvcid: 4420 00:08:07.556 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:07.556 traddr: 10.0.0.2 00:08:07.556 eflags: none 00:08:07.556 sectype: none 00:08:07.556 =====Discovery Log Entry 4====== 00:08:07.556 trtype: tcp 00:08:07.556 adrfam: ipv4 00:08:07.556 subtype: nvme subsystem 00:08:07.556 treq: not required 00:08:07.556 portid: 0 00:08:07.556 trsvcid: 4420 00:08:07.556 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:08:07.556 traddr: 10.0.0.2 00:08:07.556 eflags: none 00:08:07.556 sectype: none 00:08:07.556 =====Discovery Log Entry 5====== 00:08:07.556 trtype: tcp 00:08:07.556 adrfam: ipv4 00:08:07.556 subtype: discovery subsystem referral 00:08:07.556 treq: not required 00:08:07.556 portid: 0 00:08:07.556 trsvcid: 4430 00:08:07.556 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.556 traddr: 10.0.0.2 00:08:07.556 eflags: none 00:08:07.556 sectype: none 00:08:07.556 17:18:46 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:07.556 Perform nvmf subsystem discovery via RPC 00:08:07.556 17:18:46 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:07.556 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.556 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 [2024-07-12 17:18:46.322262] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:07.556 [ 00:08:07.556 { 00:08:07.556 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:07.556 "subtype": "Discovery", 00:08:07.556 "listen_addresses": [ 00:08:07.556 { 00:08:07.556 "transport": "TCP", 00:08:07.556 "trtype": "TCP", 00:08:07.556 "adrfam": "IPv4", 00:08:07.556 "traddr": "10.0.0.2", 00:08:07.556 "trsvcid": "4420" 00:08:07.556 } 00:08:07.556 ], 00:08:07.556 "allow_any_host": true, 00:08:07.556 "hosts": [] 00:08:07.556 }, 00:08:07.556 { 00:08:07.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.556 "subtype": "NVMe", 00:08:07.556 "listen_addresses": [ 00:08:07.556 { 00:08:07.556 "transport": "TCP", 00:08:07.556 "trtype": "TCP", 00:08:07.556 "adrfam": "IPv4", 00:08:07.556 "traddr": "10.0.0.2", 00:08:07.556 "trsvcid": "4420" 00:08:07.556 } 00:08:07.556 ], 00:08:07.556 "allow_any_host": true, 00:08:07.556 "hosts": [], 00:08:07.556 "serial_number": "SPDK00000000000001", 00:08:07.556 "model_number": 
"SPDK bdev Controller", 00:08:07.556 "max_namespaces": 32, 00:08:07.556 "min_cntlid": 1, 00:08:07.556 "max_cntlid": 65519, 00:08:07.556 "namespaces": [ 00:08:07.556 { 00:08:07.556 "nsid": 1, 00:08:07.556 "bdev_name": "Null1", 00:08:07.556 "name": "Null1", 00:08:07.556 "nguid": "B5BF6F12114047238140070C38C12BCA", 00:08:07.556 "uuid": "b5bf6f12-1140-4723-8140-070c38c12bca" 00:08:07.556 } 00:08:07.556 ] 00:08:07.556 }, 00:08:07.556 { 00:08:07.556 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:07.556 "subtype": "NVMe", 00:08:07.556 "listen_addresses": [ 00:08:07.556 { 00:08:07.556 "transport": "TCP", 00:08:07.556 "trtype": "TCP", 00:08:07.556 "adrfam": "IPv4", 00:08:07.556 "traddr": "10.0.0.2", 00:08:07.556 "trsvcid": "4420" 00:08:07.556 } 00:08:07.556 ], 00:08:07.556 "allow_any_host": true, 00:08:07.556 "hosts": [], 00:08:07.556 "serial_number": "SPDK00000000000002", 00:08:07.556 "model_number": "SPDK bdev Controller", 00:08:07.556 "max_namespaces": 32, 00:08:07.556 "min_cntlid": 1, 00:08:07.556 "max_cntlid": 65519, 00:08:07.556 "namespaces": [ 00:08:07.556 { 00:08:07.556 "nsid": 1, 00:08:07.556 "bdev_name": "Null2", 00:08:07.556 "name": "Null2", 00:08:07.556 "nguid": "96EFF5A2A15F4F23BBCD724687266A33", 00:08:07.556 "uuid": "96eff5a2-a15f-4f23-bbcd-724687266a33" 00:08:07.556 } 00:08:07.556 ] 00:08:07.556 }, 00:08:07.556 { 00:08:07.556 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:07.556 "subtype": "NVMe", 00:08:07.556 "listen_addresses": [ 00:08:07.556 { 00:08:07.556 "transport": "TCP", 00:08:07.556 "trtype": "TCP", 00:08:07.556 "adrfam": "IPv4", 00:08:07.556 "traddr": "10.0.0.2", 00:08:07.556 "trsvcid": "4420" 00:08:07.556 } 00:08:07.556 ], 00:08:07.556 "allow_any_host": true, 00:08:07.556 "hosts": [], 00:08:07.556 "serial_number": "SPDK00000000000003", 00:08:07.556 "model_number": "SPDK bdev Controller", 00:08:07.556 "max_namespaces": 32, 00:08:07.556 "min_cntlid": 1, 00:08:07.556 "max_cntlid": 65519, 00:08:07.556 "namespaces": [ 00:08:07.556 { 00:08:07.556 "nsid": 1, 
00:08:07.556 "bdev_name": "Null3", 00:08:07.556 "name": "Null3", 00:08:07.556 "nguid": "A23F5DEC7B0B4782B06403EC459C7752", 00:08:07.556 "uuid": "a23f5dec-7b0b-4782-b064-03ec459c7752" 00:08:07.556 } 00:08:07.556 ] 00:08:07.556 }, 00:08:07.556 { 00:08:07.556 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:07.556 "subtype": "NVMe", 00:08:07.556 "listen_addresses": [ 00:08:07.556 { 00:08:07.556 "transport": "TCP", 00:08:07.556 "trtype": "TCP", 00:08:07.556 "adrfam": "IPv4", 00:08:07.556 "traddr": "10.0.0.2", 00:08:07.556 "trsvcid": "4420" 00:08:07.556 } 00:08:07.556 ], 00:08:07.556 "allow_any_host": true, 00:08:07.556 "hosts": [], 00:08:07.556 "serial_number": "SPDK00000000000004", 00:08:07.556 "model_number": "SPDK bdev Controller", 00:08:07.556 "max_namespaces": 32, 00:08:07.556 "min_cntlid": 1, 00:08:07.556 "max_cntlid": 65519, 00:08:07.556 "namespaces": [ 00:08:07.556 { 00:08:07.556 "nsid": 1, 00:08:07.556 "bdev_name": "Null4", 00:08:07.556 "name": "Null4", 00:08:07.556 "nguid": "DDF83CB09E4C49058F5579432499D95A", 00:08:07.556 "uuid": "ddf83cb0-9e4c-4905-8f55-79432499d95a" 00:08:07.556 } 00:08:07.556 ] 00:08:07.556 } 00:08:07.556 ] 00:08:07.556 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.556 17:18:46 -- target/discovery.sh@42 -- # seq 1 4 00:08:07.556 17:18:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.556 17:18:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.556 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.556 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.556 17:18:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:07.556 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.556 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.556 17:18:46 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.556 17:18:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:07.556 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.556 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.556 17:18:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:07.556 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.556 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.556 17:18:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.556 17:18:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:07.556 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.556 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.556 17:18:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:07.556 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.557 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.557 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.557 17:18:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.557 17:18:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:07.557 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.557 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.557 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.557 17:18:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:07.557 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.557 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.557 
17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.557 17:18:46 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.557 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.557 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.557 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.557 17:18:46 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:07.557 17:18:46 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:07.557 17:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.557 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.557 17:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.557 17:18:46 -- target/discovery.sh@49 -- # check_bdevs= 00:08:07.557 17:18:46 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:07.557 17:18:46 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:07.557 17:18:46 -- target/discovery.sh@57 -- # nvmftestfini 00:08:07.557 17:18:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:07.557 17:18:46 -- nvmf/common.sh@116 -- # sync 00:08:07.557 17:18:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:07.557 17:18:46 -- nvmf/common.sh@119 -- # set +e 00:08:07.557 17:18:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:07.557 17:18:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:07.557 rmmod nvme_tcp 00:08:07.557 rmmod nvme_fabrics 00:08:07.557 rmmod nvme_keyring 00:08:07.814 17:18:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:07.814 17:18:46 -- nvmf/common.sh@123 -- # set -e 00:08:07.814 17:18:46 -- nvmf/common.sh@124 -- # return 0 00:08:07.814 17:18:46 -- nvmf/common.sh@477 -- # '[' -n 3959039 ']' 00:08:07.814 17:18:46 -- nvmf/common.sh@478 -- # killprocess 3959039 00:08:07.814 17:18:46 -- common/autotest_common.sh@926 -- # '[' -z 3959039 ']' 00:08:07.814 17:18:46 -- common/autotest_common.sh@930 -- # kill -0 3959039 00:08:07.814 
17:18:46 -- common/autotest_common.sh@931 -- # uname 00:08:07.814 17:18:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:07.814 17:18:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3959039 00:08:07.814 17:18:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:07.814 17:18:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:07.814 17:18:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3959039' 00:08:07.814 killing process with pid 3959039 00:08:07.814 17:18:46 -- common/autotest_common.sh@945 -- # kill 3959039 00:08:07.814 [2024-07-12 17:18:46.596357] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:07.814 17:18:46 -- common/autotest_common.sh@950 -- # wait 3959039 00:08:08.071 17:18:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:08.071 17:18:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:08.071 17:18:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:08.071 17:18:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.071 17:18:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:08.071 17:18:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.071 17:18:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.071 17:18:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.969 17:18:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:09.969 00:08:09.969 real 0m9.699s 00:08:09.969 user 0m8.099s 00:08:09.969 sys 0m4.710s 00:08:09.969 17:18:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.969 17:18:48 -- common/autotest_common.sh@10 -- # set +x 00:08:09.969 ************************************ 00:08:09.969 END TEST nvmf_discovery 00:08:09.969 ************************************ 00:08:09.969 17:18:48 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.969 17:18:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:09.969 17:18:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.969 17:18:48 -- common/autotest_common.sh@10 -- # set +x 00:08:09.969 ************************************ 00:08:09.969 START TEST nvmf_referrals 00:08:09.969 ************************************ 00:08:09.969 17:18:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.226 * Looking for test storage... 00:08:10.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.226 17:18:48 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.226 17:18:48 -- nvmf/common.sh@7 -- # uname -s 00:08:10.226 17:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.226 17:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.226 17:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.226 17:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.226 17:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.226 17:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.226 17:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.227 17:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.227 17:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.227 17:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.227 17:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:10.227 17:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:10.227 17:18:48 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.227 17:18:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.227 17:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.227 17:18:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.227 17:18:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.227 17:18:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.227 17:18:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.227 17:18:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.227 17:18:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.227 17:18:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.227 17:18:49 -- paths/export.sh@5 -- # export PATH 00:08:10.227 17:18:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.227 17:18:49 -- nvmf/common.sh@46 -- # : 0 00:08:10.227 17:18:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:10.227 17:18:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:10.227 17:18:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:10.227 17:18:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.227 17:18:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.227 17:18:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:10.227 17:18:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:10.227 17:18:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:10.227 17:18:49 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:10.227 17:18:49 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:10.227 17:18:49 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:10.227 17:18:49 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:10.227 17:18:49 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:10.227 17:18:49 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:10.227 17:18:49 -- target/referrals.sh@37 -- # nvmftestinit 00:08:10.227 17:18:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:10.227 17:18:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.227 17:18:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:10.227 17:18:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:10.227 17:18:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:10.227 17:18:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.227 17:18:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.227 17:18:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.227 17:18:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:10.227 17:18:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:10.227 17:18:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:10.227 17:18:49 -- common/autotest_common.sh@10 -- # set +x 00:08:15.493 17:18:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:15.493 17:18:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:15.493 17:18:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:15.493 17:18:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:15.493 17:18:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:15.493 17:18:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:15.493 17:18:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:15.493 17:18:54 -- nvmf/common.sh@294 -- # net_devs=() 00:08:15.493 17:18:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:15.493 17:18:54 -- nvmf/common.sh@295 -- # e810=() 00:08:15.493 17:18:54 -- nvmf/common.sh@295 -- # local 
-ga e810 00:08:15.493 17:18:54 -- nvmf/common.sh@296 -- # x722=() 00:08:15.493 17:18:54 -- nvmf/common.sh@296 -- # local -ga x722 00:08:15.493 17:18:54 -- nvmf/common.sh@297 -- # mlx=() 00:08:15.493 17:18:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:15.493 17:18:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.493 17:18:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:15.493 17:18:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:15.493 17:18:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:15.493 17:18:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:15.493 17:18:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:15.493 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:15.493 17:18:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:15.493 17:18:54 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:15.493 17:18:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:15.493 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:15.493 17:18:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:15.493 17:18:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:15.493 17:18:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.493 17:18:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:15.493 17:18:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.493 17:18:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:15.493 Found net devices under 0000:af:00.0: cvl_0_0 00:08:15.493 17:18:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.493 17:18:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:15.493 17:18:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.493 17:18:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:15.493 17:18:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.493 17:18:54 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:15.493 Found net devices under 0000:af:00.1: cvl_0_1 00:08:15.493 17:18:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.493 17:18:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:15.493 17:18:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:15.493 17:18:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:15.493 17:18:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:15.493 17:18:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.493 17:18:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.493 17:18:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.493 17:18:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:15.493 17:18:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.493 17:18:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.493 17:18:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:15.493 17:18:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.493 17:18:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.493 17:18:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:15.493 17:18:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:15.493 17:18:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.493 17:18:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.493 17:18:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.493 17:18:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.752 17:18:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:15.752 17:18:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.752 17:18:54 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:15.752 17:18:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.752 17:18:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:15.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:15.752 00:08:15.752 --- 10.0.0.2 ping statistics --- 00:08:15.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.752 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:15.752 17:18:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:08:15.752 00:08:15.752 --- 10.0.0.1 ping statistics --- 00:08:15.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.752 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:15.752 17:18:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.752 17:18:54 -- nvmf/common.sh@410 -- # return 0 00:08:15.752 17:18:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:15.752 17:18:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.752 17:18:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:15.752 17:18:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:15.752 17:18:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.752 17:18:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:15.752 17:18:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:15.752 17:18:54 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:15.752 17:18:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:15.752 17:18:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:15.752 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:08:15.752 17:18:54 -- nvmf/common.sh@469 -- # nvmfpid=3962951 00:08:15.752 17:18:54 
-- nvmf/common.sh@470 -- # waitforlisten 3962951 00:08:15.752 17:18:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.752 17:18:54 -- common/autotest_common.sh@819 -- # '[' -z 3962951 ']' 00:08:15.752 17:18:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.752 17:18:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:15.752 17:18:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.752 17:18:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:15.752 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:08:15.752 [2024-07-12 17:18:54.695402] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:15.752 [2024-07-12 17:18:54.695468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.010 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.010 [2024-07-12 17:18:54.784321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.010 [2024-07-12 17:18:54.828218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:16.010 [2024-07-12 17:18:54.828372] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.010 [2024-07-12 17:18:54.828384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.010 [2024-07-12 17:18:54.828394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:16.010 [2024-07-12 17:18:54.828439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.010 [2024-07-12 17:18:54.828540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.010 [2024-07-12 17:18:54.828632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.010 [2024-07-12 17:18:54.828634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.937 17:18:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:16.937 17:18:55 -- common/autotest_common.sh@852 -- # return 0 00:08:16.937 17:18:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:16.937 17:18:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 17:18:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.937 17:18:55 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.937 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 [2024-07-12 17:18:55.673218] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.937 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:16.937 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 [2024-07-12 17:18:55.689392] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:16.937 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:16.937 17:18:55 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:16.937 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:16.937 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.937 17:18:55 -- target/referrals.sh@48 -- # jq length 00:08:16.937 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:16.937 17:18:55 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:16.937 17:18:55 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:16.937 17:18:55 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:16.937 17:18:55 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.937 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.937 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.937 17:18:55 -- target/referrals.sh@21 -- # sort 00:08:16.937 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:16.937 17:18:55 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:16.937 17:18:55 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:16.937 17:18:55 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.937 17:18:55 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.937 17:18:55 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.937 17:18:55 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.937 17:18:55 -- target/referrals.sh@26 -- # sort 00:08:17.193 17:18:55 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:17.193 17:18:55 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:17.193 17:18:55 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:17.193 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.193 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:17.193 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.193 17:18:55 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:17.193 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.193 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:17.193 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.193 17:18:55 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:17.193 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.193 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:17.193 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.193 17:18:55 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.193 17:18:55 -- target/referrals.sh@56 -- # jq length 00:08:17.193 17:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.193 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:17.193 17:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.193 17:18:56 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:17.193 17:18:56 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:17.193 17:18:56 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.193 17:18:56 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.193 17:18:56 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.193 17:18:56 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.193 17:18:56 -- target/referrals.sh@26 -- # sort 00:08:17.193 17:18:56 -- target/referrals.sh@26 -- # echo 00:08:17.194 17:18:56 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:17.194 17:18:56 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:17.194 17:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.194 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.194 17:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.194 17:18:56 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.194 17:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.194 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.194 17:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.194 17:18:56 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:17.194 17:18:56 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.194 17:18:56 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.194 17:18:56 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.194 17:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.194 17:18:56 -- target/referrals.sh@21 -- # sort 00:08:17.194 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.194 17:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.450 17:18:56 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:17.450 17:18:56 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:17.450 17:18:56 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:17.450 17:18:56 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.450 17:18:56 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.450 17:18:56 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.450 17:18:56 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.450 17:18:56 -- target/referrals.sh@26 -- # sort 00:08:17.450 17:18:56 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:17.450 17:18:56 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:17.450 17:18:56 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:17.450 17:18:56 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:17.450 17:18:56 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:17.450 17:18:56 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:08:17.450 17:18:56 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:17.705 17:18:56 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:17.705 17:18:56 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:17.705 17:18:56 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:17.705 17:18:56 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:17.705 17:18:56 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.705 17:18:56 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:17.705 17:18:56 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:17.705 17:18:56 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.705 17:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.705 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 17:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.705 17:18:56 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:17.705 17:18:56 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.705 17:18:56 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.705 17:18:56 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.705 17:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.705 17:18:56 -- target/referrals.sh@21 -- # sort 00:08:17.705 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 17:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:08:17.705 17:18:56 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:17.705 17:18:56 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:17.706 17:18:56 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:17.706 17:18:56 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.706 17:18:56 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.961 17:18:56 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.961 17:18:56 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.961 17:18:56 -- target/referrals.sh@26 -- # sort 00:08:17.961 17:18:56 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:17.961 17:18:56 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:17.961 17:18:56 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:17.961 17:18:56 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:17.961 17:18:56 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:17.961 17:18:56 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.961 17:18:56 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:18.217 17:18:56 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:18.217 17:18:56 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:18.217 17:18:56 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:18.217 17:18:56 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:18.217 17:18:56 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.217 17:18:56 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:18.217 17:18:57 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:18.217 17:18:57 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:18.217 17:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.217 17:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:18.217 17:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.217 17:18:57 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.217 17:18:57 -- target/referrals.sh@82 -- # jq length 00:08:18.217 17:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.217 17:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:18.217 17:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.217 17:18:57 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:18.217 17:18:57 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:18.217 17:18:57 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.217 17:18:57 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.217 17:18:57 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.217 17:18:57 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.217 17:18:57 -- target/referrals.sh@26 -- # sort 00:08:18.473 17:18:57 -- target/referrals.sh@26 -- # echo 00:08:18.473 17:18:57 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:18.473 17:18:57 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:18.473 17:18:57 -- target/referrals.sh@86 -- # nvmftestfini 00:08:18.473 17:18:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:18.473 17:18:57 -- nvmf/common.sh@116 -- # sync 00:08:18.473 17:18:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:18.473 17:18:57 -- nvmf/common.sh@119 -- # set +e 00:08:18.473 17:18:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:18.473 17:18:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:18.473 rmmod nvme_tcp 00:08:18.473 rmmod nvme_fabrics 00:08:18.473 rmmod nvme_keyring 00:08:18.473 17:18:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:18.473 17:18:57 -- nvmf/common.sh@123 -- # set -e 00:08:18.473 17:18:57 -- nvmf/common.sh@124 -- # return 0 00:08:18.473 17:18:57 -- nvmf/common.sh@477 -- # '[' -n 3962951 ']' 00:08:18.473 17:18:57 -- nvmf/common.sh@478 -- # killprocess 3962951 00:08:18.473 17:18:57 -- common/autotest_common.sh@926 -- # '[' -z 3962951 ']' 00:08:18.473 17:18:57 -- common/autotest_common.sh@930 -- # kill -0 3962951 00:08:18.473 17:18:57 -- common/autotest_common.sh@931 -- # uname 00:08:18.473 17:18:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:18.473 17:18:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3962951 00:08:18.473 17:18:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:18.473 17:18:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:18.473 17:18:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3962951' 00:08:18.473 killing process with pid 3962951 00:08:18.473 17:18:57 -- common/autotest_common.sh@945 -- # kill 3962951 00:08:18.473 17:18:57 -- common/autotest_common.sh@950 -- # wait 3962951 00:08:18.731 17:18:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:18.731 17:18:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:18.731 
17:18:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:18.731 17:18:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.731 17:18:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:18.731 17:18:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.731 17:18:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.731 17:18:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.255 17:18:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:21.255 00:08:21.255 real 0m10.728s 00:08:21.255 user 0m13.376s 00:08:21.255 sys 0m4.970s 00:08:21.255 17:18:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.255 17:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:21.255 ************************************ 00:08:21.255 END TEST nvmf_referrals 00:08:21.255 ************************************ 00:08:21.255 17:18:59 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:21.255 17:18:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:21.255 17:18:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.255 17:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:21.255 ************************************ 00:08:21.255 START TEST nvmf_connect_disconnect 00:08:21.255 ************************************ 00:08:21.255 17:18:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:21.255 * Looking for test storage... 
00:08:21.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.255 17:18:59 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.255 17:18:59 -- nvmf/common.sh@7 -- # uname -s 00:08:21.255 17:18:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.255 17:18:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.255 17:18:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.255 17:18:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.255 17:18:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.255 17:18:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.255 17:18:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.255 17:18:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.255 17:18:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.255 17:18:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.255 17:18:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:21.255 17:18:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:21.255 17:18:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.255 17:18:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.255 17:18:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.255 17:18:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.255 17:18:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.255 17:18:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.255 17:18:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.255 17:18:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.255 17:18:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.255 17:18:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.255 17:18:59 -- paths/export.sh@5 -- # export PATH 00:08:21.255 17:18:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.255 17:18:59 -- nvmf/common.sh@46 -- # : 0 00:08:21.255 17:18:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:21.255 17:18:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:21.255 17:18:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:21.255 17:18:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.255 17:18:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.255 17:18:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:21.255 17:18:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:21.255 17:18:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:21.255 17:18:59 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.255 17:18:59 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.255 17:18:59 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:21.255 17:18:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:21.255 17:18:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.255 17:18:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:21.255 17:18:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:21.255 17:18:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:21.255 17:18:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.255 17:18:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.255 17:18:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:21.255 17:18:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:21.255 17:18:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:21.255 17:18:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:21.255 17:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:26.624 17:19:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:26.624 17:19:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:26.624 17:19:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:26.624 17:19:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:26.624 17:19:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:26.624 17:19:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:26.624 17:19:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:26.624 17:19:05 -- nvmf/common.sh@294 -- # net_devs=() 00:08:26.624 17:19:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:26.624 17:19:05 -- nvmf/common.sh@295 -- # e810=() 00:08:26.624 17:19:05 -- nvmf/common.sh@295 -- # local -ga e810 00:08:26.624 17:19:05 -- nvmf/common.sh@296 -- # x722=() 00:08:26.624 17:19:05 -- nvmf/common.sh@296 -- # local -ga x722 00:08:26.624 17:19:05 -- nvmf/common.sh@297 -- # mlx=() 00:08:26.624 17:19:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:26.624 17:19:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:26.624 17:19:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.624 17:19:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:26.624 17:19:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:26.625 17:19:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:26.625 17:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.625 17:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:26.625 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:26.625 17:19:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.625 17:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:26.625 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:26.625 17:19:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:26.625 17:19:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:26.625 
17:19:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.625 17:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.625 17:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.625 17:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.625 17:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:26.625 Found net devices under 0000:af:00.0: cvl_0_0 00:08:26.625 17:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.625 17:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.625 17:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.625 17:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.625 17:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.625 17:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:26.625 Found net devices under 0000:af:00.1: cvl_0_1 00:08:26.625 17:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.625 17:19:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:26.625 17:19:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:26.625 17:19:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:26.625 17:19:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.625 17:19:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.625 17:19:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.625 17:19:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:26.625 17:19:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.625 17:19:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.625 17:19:05 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:26.625 17:19:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.625 17:19:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.625 17:19:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:26.625 17:19:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:26.625 17:19:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.625 17:19:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.625 17:19:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.625 17:19:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.625 17:19:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:26.625 17:19:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.625 17:19:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.625 17:19:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.625 17:19:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:26.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:08:26.625 00:08:26.625 --- 10.0.0.2 ping statistics --- 00:08:26.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.625 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:26.625 17:19:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:08:26.625 00:08:26.625 --- 10.0.0.1 ping statistics --- 00:08:26.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.625 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:26.625 17:19:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.625 17:19:05 -- nvmf/common.sh@410 -- # return 0 00:08:26.625 17:19:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:26.625 17:19:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.625 17:19:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:26.625 17:19:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.625 17:19:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:26.625 17:19:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:26.625 17:19:05 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:26.625 17:19:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:26.625 17:19:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:26.625 17:19:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.883 17:19:05 -- nvmf/common.sh@469 -- # nvmfpid=3967208 00:08:26.883 17:19:05 -- nvmf/common.sh@470 -- # waitforlisten 3967208 00:08:26.883 17:19:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.883 17:19:05 -- common/autotest_common.sh@819 -- # '[' -z 3967208 ']' 00:08:26.883 17:19:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.883 17:19:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:26.883 17:19:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.883 17:19:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:26.883 17:19:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.883 [2024-07-12 17:19:05.645120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:26.883 [2024-07-12 17:19:05.645175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.883 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.883 [2024-07-12 17:19:05.736244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.883 [2024-07-12 17:19:05.779904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.883 [2024-07-12 17:19:05.780049] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.883 [2024-07-12 17:19:05.780059] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.883 [2024-07-12 17:19:05.780069] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:26.883 [2024-07-12 17:19:05.780124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.883 [2024-07-12 17:19:05.780143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.883 [2024-07-12 17:19:05.780235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.883 [2024-07-12 17:19:05.780238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.813 17:19:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:27.813 17:19:06 -- common/autotest_common.sh@852 -- # return 0 00:08:27.813 17:19:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:27.813 17:19:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:27.813 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:27.813 17:19:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.813 17:19:06 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:27.813 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.813 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:27.813 [2024-07-12 17:19:06.617284] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.814 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:27.814 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.814 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:27.814 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:27.814 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.814 17:19:06 -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.814 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:27.814 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.814 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:27.814 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.814 17:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.814 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:08:27.814 [2024-07-12 17:19:06.673160] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.814 17:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:27.814 17:19:06 -- target/connect_disconnect.sh@34 -- # set +x 00:08:30.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:53.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... identical "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages repeated for the intermediate iterations, timestamps 00:08:55.536 through 00:10:21.008 ...] 00:10:22.899 [2024-07-12 17:21:01.798081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bc2a0 is same with the state(5) to be set 00:10:22.899 [2024-07-12 17:21:01.798151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bc2a0 is same with the state(5) to be set 00:10:22.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... identical "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages repeated for the remaining iterations, timestamps 00:10:25.422 through 00:11:48.574 ...] 00:11:50.479 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.827 17:22:56 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:17.827 17:22:56 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:17.827 17:22:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:17.827 17:22:56 -- nvmf/common.sh@116 -- # sync 00:12:17.827 17:22:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:17.827 17:22:56 -- nvmf/common.sh@119 -- # set +e 00:12:17.827 17:22:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:17.827 17:22:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:17.827 rmmod nvme_tcp 00:12:17.827 rmmod nvme_fabrics 00:12:17.827 rmmod nvme_keyring 00:12:17.827 17:22:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:17.827 17:22:56 -- nvmf/common.sh@123 -- # set -e 00:12:17.827 17:22:56 -- nvmf/common.sh@124 -- # return 0 00:12:17.827 17:22:56 -- nvmf/common.sh@477 -- # '[' -n 3967208 ']' 00:12:17.827 17:22:56 -- nvmf/common.sh@478 -- # killprocess 3967208 00:12:17.827 17:22:56 -- common/autotest_common.sh@926 -- # '[' -z 3967208 ']' 00:12:17.827 17:22:56 
-- common/autotest_common.sh@930 -- # kill -0 3967208 00:12:17.827 17:22:56 -- common/autotest_common.sh@931 -- # uname 00:12:17.827 17:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:17.827 17:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3967208 00:12:17.827 17:22:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:17.827 17:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:17.827 17:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3967208' 00:12:17.827 killing process with pid 3967208 00:12:17.827 17:22:56 -- common/autotest_common.sh@945 -- # kill 3967208 00:12:17.827 17:22:56 -- common/autotest_common.sh@950 -- # wait 3967208 00:12:18.086 17:22:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.086 17:22:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.086 17:22:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.086 17:22:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.086 17:22:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.086 17:22:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.086 17:22:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.086 17:22:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.622 17:22:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:20.622 00:12:20.622 real 3m59.317s 00:12:20.622 user 15m16.037s 00:12:20.622 sys 0m21.617s 00:12:20.622 17:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.622 17:22:58 -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 ************************************ 00:12:20.622 END TEST nvmf_connect_disconnect 00:12:20.622 ************************************ 00:12:20.622 17:22:59 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:12:20.622 17:22:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:20.622 17:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:20.622 17:22:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 ************************************ 00:12:20.622 START TEST nvmf_multitarget 00:12:20.622 ************************************ 00:12:20.622 17:22:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.622 * Looking for test storage... 00:12:20.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.622 17:22:59 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.622 17:22:59 -- nvmf/common.sh@7 -- # uname -s 00:12:20.622 17:22:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.622 17:22:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.622 17:22:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.622 17:22:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.622 17:22:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.622 17:22:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.622 17:22:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.622 17:22:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.622 17:22:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.623 17:22:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.623 17:22:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:20.623 17:22:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:20.623 17:22:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.623 17:22:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:12:20.623 17:22:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.623 17:22:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.623 17:22:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.623 17:22:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.623 17:22:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.623 17:22:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.623 17:22:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.623 17:22:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.623 17:22:59 -- paths/export.sh@5 -- # export PATH 00:12:20.623 17:22:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.623 17:22:59 -- nvmf/common.sh@46 -- # : 0 00:12:20.623 17:22:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:20.623 17:22:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:20.623 17:22:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:20.623 17:22:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.623 17:22:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.623 17:22:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:20.623 17:22:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:20.623 17:22:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:20.623 17:22:59 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:20.623 17:22:59 -- 
target/multitarget.sh@15 -- # nvmftestinit 00:12:20.623 17:22:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:20.623 17:22:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.623 17:22:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:20.623 17:22:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:20.623 17:22:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:20.623 17:22:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.623 17:22:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.623 17:22:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.623 17:22:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:20.623 17:22:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:20.623 17:22:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:20.623 17:22:59 -- common/autotest_common.sh@10 -- # set +x 00:12:25.900 17:23:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:25.900 17:23:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:25.900 17:23:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:25.900 17:23:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:25.900 17:23:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:25.900 17:23:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:25.900 17:23:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:25.900 17:23:03 -- nvmf/common.sh@294 -- # net_devs=() 00:12:25.900 17:23:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:25.900 17:23:03 -- nvmf/common.sh@295 -- # e810=() 00:12:25.900 17:23:03 -- nvmf/common.sh@295 -- # local -ga e810 00:12:25.900 17:23:03 -- nvmf/common.sh@296 -- # x722=() 00:12:25.900 17:23:03 -- nvmf/common.sh@296 -- # local -ga x722 00:12:25.900 17:23:03 -- nvmf/common.sh@297 -- # mlx=() 00:12:25.900 17:23:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:25.900 17:23:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:12:25.900 17:23:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.900 17:23:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:25.900 17:23:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:25.900 17:23:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:25.900 17:23:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:25.900 17:23:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:25.900 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:25.900 17:23:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:12:25.900 17:23:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:25.900 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:25.900 17:23:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:25.900 17:23:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:25.900 17:23:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.900 17:23:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:25.900 17:23:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.900 17:23:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:25.900 Found net devices under 0000:af:00.0: cvl_0_0 00:12:25.900 17:23:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.900 17:23:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:25.900 17:23:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.900 17:23:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:25.900 17:23:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.900 17:23:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:25.900 Found net devices under 0000:af:00.1: cvl_0_1 00:12:25.900 17:23:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.900 17:23:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:25.900 17:23:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:25.900 17:23:03 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:25.900 17:23:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:25.900 17:23:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.900 17:23:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.900 17:23:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.900 17:23:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:25.900 17:23:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.900 17:23:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.900 17:23:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:25.900 17:23:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.900 17:23:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.900 17:23:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:25.900 17:23:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:25.900 17:23:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.900 17:23:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.900 17:23:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.900 17:23:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.900 17:23:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:25.900 17:23:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.900 17:23:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.900 17:23:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.900 17:23:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:25.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:12:25.900 00:12:25.900 --- 10.0.0.2 ping statistics --- 00:12:25.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.900 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:12:25.900 17:23:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:12:25.900 00:12:25.900 --- 10.0.0.1 ping statistics --- 00:12:25.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.900 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:12:25.900 17:23:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.900 17:23:04 -- nvmf/common.sh@410 -- # return 0 00:12:25.900 17:23:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:25.900 17:23:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.900 17:23:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:25.900 17:23:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:25.900 17:23:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.900 17:23:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:25.900 17:23:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:25.900 17:23:04 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:25.900 17:23:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:25.900 17:23:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:25.900 17:23:04 -- common/autotest_common.sh@10 -- # set +x 00:12:25.900 17:23:04 -- nvmf/common.sh@469 -- # nvmfpid=4014782 00:12:25.900 17:23:04 -- nvmf/common.sh@470 -- # waitforlisten 4014782 00:12:25.900 17:23:04 -- common/autotest_common.sh@819 -- # '[' -z 4014782 ']' 00:12:25.900 17:23:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.900 17:23:04 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:12:25.900 17:23:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.900 17:23:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.900 17:23:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:25.900 17:23:04 -- common/autotest_common.sh@10 -- # set +x 00:12:25.900 [2024-07-12 17:23:04.143162] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:25.900 [2024-07-12 17:23:04.143215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.900 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.900 [2024-07-12 17:23:04.229144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.900 [2024-07-12 17:23:04.271744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:25.900 [2024-07-12 17:23:04.271886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.900 [2024-07-12 17:23:04.271897] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.900 [2024-07-12 17:23:04.271906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:25.900 [2024-07-12 17:23:04.271951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.900 [2024-07-12 17:23:04.272054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.900 [2024-07-12 17:23:04.272144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.900 [2024-07-12 17:23:04.272147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.159 17:23:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:26.159 17:23:05 -- common/autotest_common.sh@852 -- # return 0 00:12:26.159 17:23:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:26.159 17:23:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:26.159 17:23:05 -- common/autotest_common.sh@10 -- # set +x 00:12:26.159 17:23:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.159 17:23:05 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:26.159 17:23:05 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.159 17:23:05 -- target/multitarget.sh@21 -- # jq length 00:12:26.419 17:23:05 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:26.419 17:23:05 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:26.419 "nvmf_tgt_1" 00:12:26.419 17:23:05 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:26.678 "nvmf_tgt_2" 00:12:26.678 17:23:05 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.678 17:23:05 -- target/multitarget.sh@28 -- # jq length 00:12:26.937 
17:23:05 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:26.937 17:23:05 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:26.937 true 00:12:26.937 17:23:05 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:26.937 true 00:12:27.196 17:23:05 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.196 17:23:05 -- target/multitarget.sh@35 -- # jq length 00:12:27.196 17:23:06 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:27.196 17:23:06 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:27.196 17:23:06 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:27.196 17:23:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:27.196 17:23:06 -- nvmf/common.sh@116 -- # sync 00:12:27.196 17:23:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:27.196 17:23:06 -- nvmf/common.sh@119 -- # set +e 00:12:27.196 17:23:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:27.196 17:23:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:27.196 rmmod nvme_tcp 00:12:27.196 rmmod nvme_fabrics 00:12:27.196 rmmod nvme_keyring 00:12:27.196 17:23:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:27.196 17:23:06 -- nvmf/common.sh@123 -- # set -e 00:12:27.196 17:23:06 -- nvmf/common.sh@124 -- # return 0 00:12:27.196 17:23:06 -- nvmf/common.sh@477 -- # '[' -n 4014782 ']' 00:12:27.196 17:23:06 -- nvmf/common.sh@478 -- # killprocess 4014782 00:12:27.196 17:23:06 -- common/autotest_common.sh@926 -- # '[' -z 4014782 ']' 00:12:27.196 17:23:06 -- common/autotest_common.sh@930 -- # kill -0 4014782 00:12:27.196 17:23:06 -- common/autotest_common.sh@931 -- # uname 00:12:27.196 17:23:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:12:27.196 17:23:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4014782 00:12:27.196 17:23:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:27.196 17:23:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:27.196 17:23:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4014782' 00:12:27.196 killing process with pid 4014782 00:12:27.196 17:23:06 -- common/autotest_common.sh@945 -- # kill 4014782 00:12:27.196 17:23:06 -- common/autotest_common.sh@950 -- # wait 4014782 00:12:27.456 17:23:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:27.456 17:23:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:27.456 17:23:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:27.456 17:23:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.456 17:23:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:27.456 17:23:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.456 17:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.456 17:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.987 17:23:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:29.987 00:12:29.987 real 0m9.355s 00:12:29.987 user 0m10.438s 00:12:29.987 sys 0m4.237s 00:12:29.987 17:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.987 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:12:29.987 ************************************ 00:12:29.987 END TEST nvmf_multitarget 00:12:29.987 ************************************ 00:12:29.987 17:23:08 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.987 17:23:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:29.987 17:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:29.987 17:23:08 -- common/autotest_common.sh@10 -- # set +x 
00:12:29.987 ************************************ 00:12:29.987 START TEST nvmf_rpc 00:12:29.987 ************************************ 00:12:29.987 17:23:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.987 * Looking for test storage... 00:12:29.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.987 17:23:08 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.987 17:23:08 -- nvmf/common.sh@7 -- # uname -s 00:12:29.987 17:23:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.987 17:23:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.987 17:23:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.987 17:23:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.987 17:23:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.987 17:23:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.987 17:23:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.987 17:23:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.987 17:23:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.987 17:23:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.987 17:23:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:29.987 17:23:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:29.987 17:23:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.987 17:23:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.987 17:23:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.987 17:23:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.987 17:23:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:12:29.987 17:23:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.987 17:23:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.987 17:23:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.987 17:23:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.987 17:23:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.988 17:23:08 -- paths/export.sh@5 -- # export PATH 00:12:29.988 17:23:08 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.988 17:23:08 -- nvmf/common.sh@46 -- # : 0 00:12:29.988 17:23:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:29.988 17:23:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:29.988 17:23:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:29.988 17:23:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.988 17:23:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.988 17:23:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:29.988 17:23:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:29.988 17:23:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:29.988 17:23:08 -- target/rpc.sh@11 -- # loops=5 00:12:29.988 17:23:08 -- target/rpc.sh@23 -- # nvmftestinit 00:12:29.988 17:23:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:29.988 17:23:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.988 17:23:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:29.988 17:23:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:29.988 17:23:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:29.988 17:23:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.988 17:23:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.988 17:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.988 17:23:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:29.988 17:23:08 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:29.988 17:23:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:29.988 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.255 17:23:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:35.255 17:23:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:35.255 17:23:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:35.255 17:23:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:35.255 17:23:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:35.255 17:23:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:35.255 17:23:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:35.255 17:23:13 -- nvmf/common.sh@294 -- # net_devs=() 00:12:35.255 17:23:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:35.255 17:23:13 -- nvmf/common.sh@295 -- # e810=() 00:12:35.255 17:23:13 -- nvmf/common.sh@295 -- # local -ga e810 00:12:35.255 17:23:13 -- nvmf/common.sh@296 -- # x722=() 00:12:35.255 17:23:13 -- nvmf/common.sh@296 -- # local -ga x722 00:12:35.255 17:23:13 -- nvmf/common.sh@297 -- # mlx=() 00:12:35.255 17:23:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:35.255 17:23:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:12:35.255 17:23:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.255 17:23:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:35.255 17:23:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:35.255 17:23:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:35.255 17:23:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:35.255 17:23:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:35.255 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:35.255 17:23:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:35.255 17:23:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:35.255 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:35.255 17:23:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:35.255 17:23:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:35.255 17:23:13 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:35.255 17:23:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.255 17:23:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:35.255 17:23:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.255 17:23:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:35.255 Found net devices under 0000:af:00.0: cvl_0_0 00:12:35.255 17:23:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.255 17:23:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:35.255 17:23:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.255 17:23:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:35.255 17:23:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.255 17:23:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:35.255 Found net devices under 0000:af:00.1: cvl_0_1 00:12:35.255 17:23:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.255 17:23:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:35.255 17:23:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:35.255 17:23:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:35.255 17:23:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:35.255 17:23:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.255 17:23:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.255 17:23:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.255 17:23:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:35.255 17:23:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.255 17:23:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.255 17:23:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:35.255 17:23:13 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.255 17:23:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.255 17:23:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:35.255 17:23:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:35.255 17:23:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.255 17:23:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.255 17:23:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.255 17:23:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.255 17:23:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:35.256 17:23:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.256 17:23:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.256 17:23:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.256 17:23:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:35.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:35.256 00:12:35.256 --- 10.0.0.2 ping statistics --- 00:12:35.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.256 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:35.256 17:23:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:12:35.256 00:12:35.256 --- 10.0.0.1 ping statistics --- 00:12:35.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.256 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:35.256 17:23:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.256 17:23:14 -- nvmf/common.sh@410 -- # return 0 00:12:35.256 17:23:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:35.256 17:23:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.256 17:23:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.256 17:23:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.256 17:23:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.256 17:23:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.256 17:23:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.256 17:23:14 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:35.256 17:23:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:35.256 17:23:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:35.256 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:12:35.256 17:23:14 -- nvmf/common.sh@469 -- # nvmfpid=4018742 00:12:35.256 17:23:14 -- nvmf/common.sh@470 -- # waitforlisten 4018742 00:12:35.256 17:23:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.256 17:23:14 -- common/autotest_common.sh@819 -- # '[' -z 4018742 ']' 00:12:35.256 17:23:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.256 17:23:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:35.256 17:23:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:35.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.256 17:23:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:35.256 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:12:35.256 [2024-07-12 17:23:14.161597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:35.256 [2024-07-12 17:23:14.161700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.514 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.514 [2024-07-12 17:23:14.289380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.514 [2024-07-12 17:23:14.333190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:35.514 [2024-07-12 17:23:14.333340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.514 [2024-07-12 17:23:14.333350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.514 [2024-07-12 17:23:14.333359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:35.514 [2024-07-12 17:23:14.333411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.514 [2024-07-12 17:23:14.333514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.514 [2024-07-12 17:23:14.333593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.514 [2024-07-12 17:23:14.333595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.452 17:23:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:36.452 17:23:15 -- common/autotest_common.sh@852 -- # return 0 00:12:36.452 17:23:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:36.452 17:23:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 17:23:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.452 17:23:15 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@26 -- # stats='{ 00:12:36.452 "tick_rate": 2200000000, 00:12:36.452 "poll_groups": [ 00:12:36.452 { 00:12:36.452 "name": "nvmf_tgt_poll_group_0", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [] 00:12:36.452 }, 00:12:36.452 { 00:12:36.452 "name": "nvmf_tgt_poll_group_1", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [] 00:12:36.452 }, 00:12:36.452 { 00:12:36.452 "name": 
"nvmf_tgt_poll_group_2", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [] 00:12:36.452 }, 00:12:36.452 { 00:12:36.452 "name": "nvmf_tgt_poll_group_3", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [] 00:12:36.452 } 00:12:36.452 ] 00:12:36.452 }' 00:12:36.452 17:23:15 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:36.452 17:23:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:36.452 17:23:15 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:36.452 17:23:15 -- target/rpc.sh@15 -- # wc -l 00:12:36.452 17:23:15 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:36.452 17:23:15 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:36.452 17:23:15 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:36.452 17:23:15 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 [2024-07-12 17:23:15.221334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@33 -- # stats='{ 00:12:36.452 "tick_rate": 2200000000, 00:12:36.452 "poll_groups": [ 00:12:36.452 { 00:12:36.452 "name": 
"nvmf_tgt_poll_group_0", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [ 00:12:36.452 { 00:12:36.452 "trtype": "TCP" 00:12:36.452 } 00:12:36.452 ] 00:12:36.452 }, 00:12:36.452 { 00:12:36.452 "name": "nvmf_tgt_poll_group_1", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [ 00:12:36.452 { 00:12:36.452 "trtype": "TCP" 00:12:36.452 } 00:12:36.452 ] 00:12:36.452 }, 00:12:36.452 { 00:12:36.452 "name": "nvmf_tgt_poll_group_2", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [ 00:12:36.452 { 00:12:36.452 "trtype": "TCP" 00:12:36.452 } 00:12:36.452 ] 00:12:36.452 }, 00:12:36.452 { 00:12:36.452 "name": "nvmf_tgt_poll_group_3", 00:12:36.452 "admin_qpairs": 0, 00:12:36.452 "io_qpairs": 0, 00:12:36.452 "current_admin_qpairs": 0, 00:12:36.452 "current_io_qpairs": 0, 00:12:36.452 "pending_bdev_io": 0, 00:12:36.452 "completed_nvme_io": 0, 00:12:36.452 "transports": [ 00:12:36.452 { 00:12:36.452 "trtype": "TCP" 00:12:36.452 } 00:12:36.452 ] 00:12:36.452 } 00:12:36.452 ] 00:12:36.452 }' 00:12:36.452 17:23:15 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:36.452 17:23:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:36.452 17:23:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:36.452 17:23:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.452 17:23:15 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:36.452 17:23:15 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:12:36.452 17:23:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:36.452 17:23:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:36.452 17:23:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.452 17:23:15 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:36.452 17:23:15 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:36.452 17:23:15 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:36.452 17:23:15 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:36.452 17:23:15 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 Malloc1 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:36.452 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.452 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.452 [2024-07-12 17:23:15.401541] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.452 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.452 17:23:15 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:36.452 17:23:15 -- common/autotest_common.sh@640 -- # local es=0 00:12:36.452 17:23:15 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:36.452 17:23:15 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:36.453 17:23:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.453 17:23:15 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:36.453 17:23:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.453 17:23:15 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:36.453 17:23:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.453 17:23:15 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:36.453 17:23:15 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:36.453 17:23:15 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:36.712 [2024-07-12 17:23:15.425964] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:12:36.712 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:36.712 could not add new controller: failed to write to nvme-fabrics device 00:12:36.712 17:23:15 -- common/autotest_common.sh@643 -- # es=1 00:12:36.712 17:23:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:36.712 17:23:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:36.712 17:23:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:36.712 17:23:15 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:36.712 17:23:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.712 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.712 17:23:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.712 17:23:15 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.091 17:23:16 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.091 17:23:16 -- common/autotest_common.sh@1177 -- # local i=0 00:12:38.091 17:23:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.091 17:23:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:38.091 17:23:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:40.084 17:23:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:40.084 17:23:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:40.085 17:23:18 -- common/autotest_common.sh@1186 -- 
# grep -c SPDKISFASTANDAWESOME 00:12:40.085 17:23:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:40.085 17:23:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.085 17:23:18 -- common/autotest_common.sh@1187 -- # return 0 00:12:40.085 17:23:18 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.085 17:23:18 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.085 17:23:18 -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.085 17:23:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:40.085 17:23:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.085 17:23:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:40.085 17:23:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.085 17:23:18 -- common/autotest_common.sh@1210 -- # return 0 00:12:40.085 17:23:18 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:40.085 17:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.085 17:23:18 -- common/autotest_common.sh@10 -- # set +x 00:12:40.085 17:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.085 17:23:18 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.085 17:23:18 -- common/autotest_common.sh@640 -- # local es=0 00:12:40.085 17:23:18 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:40.085 17:23:18 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:40.085 17:23:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.085 17:23:18 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:40.085 17:23:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.085 17:23:18 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:40.085 17:23:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.085 17:23:18 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:40.085 17:23:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:40.085 17:23:18 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.085 [2024-07-12 17:23:18.956639] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:12:40.085 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:40.085 could not add new controller: failed to write to nvme-fabrics device 00:12:40.085 17:23:18 -- common/autotest_common.sh@643 -- # es=1 00:12:40.085 17:23:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:40.085 17:23:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:40.085 17:23:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:40.085 17:23:18 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:40.085 17:23:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.085 17:23:18 -- common/autotest_common.sh@10 -- # set +x 00:12:40.085 17:23:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.085 17:23:18 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.461 17:23:20 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.461 17:23:20 -- common/autotest_common.sh@1177 -- # local i=0 00:12:41.461 17:23:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.461 17:23:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:41.461 17:23:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.365 17:23:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.365 17:23:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.365 17:23:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.624 17:23:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.624 17:23:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.624 17:23:22 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.624 17:23:22 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.624 17:23:22 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.624 17:23:22 -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.624 17:23:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:43.624 17:23:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.624 17:23:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.624 17:23:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.624 17:23:22 -- common/autotest_common.sh@1210 -- # return 0 00:12:43.624 17:23:22 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.624 17:23:22 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:43.624 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 17:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.624 17:23:22 -- target/rpc.sh@81 -- # seq 1 5 00:12:43.624 17:23:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.624 17:23:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.624 17:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.624 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 17:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.624 17:23:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.624 17:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.624 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 [2024-07-12 17:23:22.462728] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.624 17:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.624 17:23:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.624 17:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.624 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 17:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.624 17:23:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.624 17:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.624 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 17:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.624 17:23:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:45.000 17:23:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.000 17:23:23 -- common/autotest_common.sh@1177 -- # local i=0 00:12:45.000 17:23:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.000 17:23:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:45.000 17:23:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:46.904 17:23:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:46.904 17:23:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:46.904 17:23:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.904 17:23:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:46.904 17:23:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.904 17:23:25 -- common/autotest_common.sh@1187 -- # return 0 00:12:46.904 17:23:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.904 17:23:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.904 17:23:25 -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.904 17:23:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:46.904 17:23:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.904 17:23:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.904 17:23:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:46.904 17:23:25 -- common/autotest_common.sh@1210 -- # return 0 00:12:46.904 17:23:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.904 17:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.904 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 17:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:12:47.163 17:23:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.163 17:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.163 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:47.163 17:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.163 17:23:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.163 17:23:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.163 17:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.163 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:47.163 17:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.163 17:23:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.163 17:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.163 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:47.163 [2024-07-12 17:23:25.895646] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.163 17:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.163 17:23:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.163 17:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.163 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:47.163 17:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.163 17:23:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.163 17:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.163 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:47.163 17:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.163 17:23:25 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.541 17:23:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.541 17:23:27 -- common/autotest_common.sh@1177 -- # local i=0 00:12:48.541 17:23:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.541 17:23:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:48.541 17:23:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:50.444 17:23:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:50.444 17:23:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:50.444 17:23:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.444 17:23:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:50.444 17:23:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.444 17:23:29 -- common/autotest_common.sh@1187 -- # return 0 00:12:50.444 17:23:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.444 17:23:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.444 17:23:29 -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.444 17:23:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:50.444 17:23:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.444 17:23:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:50.444 17:23:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.444 17:23:29 -- common/autotest_common.sh@1210 -- # return 0 00:12:50.444 17:23:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.444 17:23:29 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:12:50.444 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.444 17:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.444 17:23:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.444 17:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.444 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.444 17:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.444 17:23:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.444 17:23:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.444 17:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.444 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.703 17:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.703 17:23:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.703 17:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.703 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.703 [2024-07-12 17:23:29.418287] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.703 17:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.703 17:23:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.703 17:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.703 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.703 17:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.703 17:23:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.703 17:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.703 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.703 17:23:29 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.703 17:23:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.081 17:23:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.081 17:23:30 -- common/autotest_common.sh@1177 -- # local i=0 00:12:52.081 17:23:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.081 17:23:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:52.081 17:23:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:53.996 17:23:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:53.996 17:23:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:53.996 17:23:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.996 17:23:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:53.996 17:23:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.996 17:23:32 -- common/autotest_common.sh@1187 -- # return 0 00:12:53.996 17:23:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.996 17:23:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.996 17:23:32 -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.996 17:23:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:53.996 17:23:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.996 17:23:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:53.996 17:23:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.996 17:23:32 -- common/autotest_common.sh@1210 -- # return 0 00:12:53.996 17:23:32 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.996 17:23:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.996 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.996 17:23:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.996 17:23:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.996 17:23:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.996 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.996 17:23:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.996 17:23:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.996 17:23:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.996 17:23:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.996 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.996 17:23:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.996 17:23:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.996 17:23:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.996 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.996 [2024-07-12 17:23:32.939562] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.996 17:23:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.996 17:23:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.996 17:23:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.996 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.996 17:23:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.996 17:23:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.996 17:23:32 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:53.996 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.996 17:23:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.996 17:23:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.372 17:23:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.372 17:23:34 -- common/autotest_common.sh@1177 -- # local i=0 00:12:55.372 17:23:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.372 17:23:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:55.372 17:23:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:57.907 17:23:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:57.907 17:23:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:57.907 17:23:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.907 17:23:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:57.907 17:23:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.907 17:23:36 -- common/autotest_common.sh@1187 -- # return 0 00:12:57.907 17:23:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.907 17:23:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.907 17:23:36 -- common/autotest_common.sh@1198 -- # local i=0 00:12:57.907 17:23:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:57.907 17:23:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.907 17:23:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:57.907 17:23:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.907 
17:23:36 -- common/autotest_common.sh@1210 -- # return 0 00:12:57.907 17:23:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.907 17:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.907 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.907 17:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.907 17:23:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.907 17:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.907 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.907 17:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.907 17:23:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.907 17:23:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.907 17:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.907 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.907 17:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.907 17:23:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.907 17:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.907 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.907 [2024-07-12 17:23:36.467446] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.907 17:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.907 17:23:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.907 17:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.907 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.907 17:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.907 17:23:36 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.907 17:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.907 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.907 17:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.907 17:23:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.284 17:23:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.284 17:23:37 -- common/autotest_common.sh@1177 -- # local i=0 00:12:59.284 17:23:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.285 17:23:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:59.285 17:23:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:01.187 17:23:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:01.187 17:23:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:01.187 17:23:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.187 17:23:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:01.187 17:23:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.187 17:23:39 -- common/autotest_common.sh@1187 -- # return 0 00:13:01.187 17:23:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.187 17:23:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.187 17:23:39 -- common/autotest_common.sh@1198 -- # local i=0 00:13:01.188 17:23:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:01.188 17:23:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.188 17:23:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:13:01.188 17:23:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.188 17:23:39 -- common/autotest_common.sh@1210 -- # return 0 00:13:01.188 17:23:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.188 17:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.188 17:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:39 -- target/rpc.sh@99 -- # seq 1 5 00:13:01.188 17:23:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.188 17:23:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.188 17:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.188 17:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 [2024-07-12 17:23:39.991624] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.188 17:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.188 17:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:39 -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.188 17:23:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 [2024-07-12 17:23:40.043780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 
17:23:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.188 17:23:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 
[2024-07-12 17:23:40.091944] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.188 17:23:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 [2024-07-12 17:23:40.144142] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.188 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.188 17:23:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.188 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.188 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.446 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.446 17:23:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.446 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.446 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.446 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.446 17:23:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.446 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.446 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.446 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.446 17:23:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.446 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.446 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.446 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.447 17:23:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.447 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.447 17:23:40 
-- common/autotest_common.sh@10 -- # set +x 00:13:01.447 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.447 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.447 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 [2024-07-12 17:23:40.192317] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.447 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.447 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.447 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.447 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.447 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.447 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.447 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.447 17:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.447 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:01.447 17:23:40 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.447 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 17:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.447 17:23:40 -- target/rpc.sh@110 -- # stats='{ 00:13:01.447 "tick_rate": 2200000000, 00:13:01.447 "poll_groups": [ 00:13:01.447 { 00:13:01.447 "name": "nvmf_tgt_poll_group_0", 00:13:01.447 "admin_qpairs": 2, 00:13:01.447 "io_qpairs": 196, 00:13:01.447 "current_admin_qpairs": 0, 00:13:01.447 "current_io_qpairs": 0, 00:13:01.447 "pending_bdev_io": 0, 00:13:01.447 "completed_nvme_io": 247, 00:13:01.447 "transports": [ 00:13:01.447 { 00:13:01.447 "trtype": "TCP" 00:13:01.447 } 00:13:01.447 ] 00:13:01.447 }, 00:13:01.447 { 00:13:01.447 "name": "nvmf_tgt_poll_group_1", 00:13:01.447 "admin_qpairs": 2, 00:13:01.447 "io_qpairs": 196, 00:13:01.447 "current_admin_qpairs": 0, 00:13:01.447 "current_io_qpairs": 0, 00:13:01.447 "pending_bdev_io": 0, 00:13:01.447 "completed_nvme_io": 247, 00:13:01.447 "transports": [ 00:13:01.447 { 00:13:01.447 "trtype": "TCP" 00:13:01.447 } 00:13:01.447 ] 00:13:01.447 }, 00:13:01.447 { 00:13:01.447 "name": "nvmf_tgt_poll_group_2", 00:13:01.447 "admin_qpairs": 1, 00:13:01.447 "io_qpairs": 196, 00:13:01.447 "current_admin_qpairs": 0, 00:13:01.447 "current_io_qpairs": 0, 00:13:01.447 "pending_bdev_io": 0, 00:13:01.447 "completed_nvme_io": 294, 00:13:01.447 "transports": [ 00:13:01.447 { 00:13:01.447 "trtype": "TCP" 00:13:01.447 } 00:13:01.447 ] 00:13:01.447 }, 00:13:01.447 { 00:13:01.447 "name": "nvmf_tgt_poll_group_3", 00:13:01.447 "admin_qpairs": 2, 00:13:01.447 "io_qpairs": 196, 00:13:01.447 "current_admin_qpairs": 0, 00:13:01.447 "current_io_qpairs": 0, 00:13:01.447 "pending_bdev_io": 0, 00:13:01.447 "completed_nvme_io": 346, 00:13:01.447 "transports": [ 00:13:01.447 { 00:13:01.447 "trtype": "TCP" 00:13:01.447 } 00:13:01.447 ] 00:13:01.447 } 00:13:01.447 ] 00:13:01.447 }' 00:13:01.447 17:23:40 -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:13:01.447 17:23:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:01.447 17:23:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:01.447 17:23:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:01.447 17:23:40 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:01.447 17:23:40 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:01.447 17:23:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:01.447 17:23:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:01.447 17:23:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:01.447 17:23:40 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:13:01.447 17:23:40 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:01.447 17:23:40 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:01.447 17:23:40 -- target/rpc.sh@123 -- # nvmftestfini 00:13:01.447 17:23:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:01.447 17:23:40 -- nvmf/common.sh@116 -- # sync 00:13:01.447 17:23:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:01.447 17:23:40 -- nvmf/common.sh@119 -- # set +e 00:13:01.447 17:23:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:01.447 17:23:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:01.447 rmmod nvme_tcp 00:13:01.447 rmmod nvme_fabrics 00:13:01.447 rmmod nvme_keyring 00:13:01.447 17:23:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:01.447 17:23:40 -- nvmf/common.sh@123 -- # set -e 00:13:01.447 17:23:40 -- nvmf/common.sh@124 -- # return 0 00:13:01.447 17:23:40 -- nvmf/common.sh@477 -- # '[' -n 4018742 ']' 00:13:01.447 17:23:40 -- nvmf/common.sh@478 -- # killprocess 4018742 00:13:01.447 17:23:40 -- common/autotest_common.sh@926 -- # '[' -z 4018742 ']' 00:13:01.447 17:23:40 -- common/autotest_common.sh@930 -- # kill -0 4018742 00:13:01.447 17:23:40 -- common/autotest_common.sh@931 -- # uname 00:13:01.447 17:23:40 -- common/autotest_common.sh@931 -- # '[' 
Linux = Linux ']' 00:13:01.447 17:23:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4018742 00:13:01.705 17:23:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:01.705 17:23:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:01.705 17:23:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4018742' 00:13:01.705 killing process with pid 4018742 00:13:01.705 17:23:40 -- common/autotest_common.sh@945 -- # kill 4018742 00:13:01.705 17:23:40 -- common/autotest_common.sh@950 -- # wait 4018742 00:13:01.705 17:23:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:01.705 17:23:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:01.705 17:23:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:01.705 17:23:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.705 17:23:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:01.705 17:23:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.705 17:23:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.705 17:23:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.236 17:23:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:04.236 00:13:04.236 real 0m34.303s 00:13:04.236 user 1m46.617s 00:13:04.236 sys 0m6.088s 00:13:04.236 17:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.236 17:23:42 -- common/autotest_common.sh@10 -- # set +x 00:13:04.236 ************************************ 00:13:04.236 END TEST nvmf_rpc 00:13:04.236 ************************************ 00:13:04.236 17:23:42 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:04.236 17:23:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:04.236 17:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:04.236 17:23:42 -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.236 ************************************ 00:13:04.236 START TEST nvmf_invalid 00:13:04.236 ************************************ 00:13:04.236 17:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:04.236 * Looking for test storage... 00:13:04.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.236 17:23:42 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.236 17:23:42 -- nvmf/common.sh@7 -- # uname -s 00:13:04.236 17:23:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.236 17:23:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.236 17:23:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.236 17:23:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.236 17:23:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.236 17:23:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.236 17:23:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.236 17:23:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.236 17:23:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.236 17:23:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.236 17:23:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:04.236 17:23:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:04.236 17:23:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.236 17:23:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.236 17:23:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.236 17:23:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.236 17:23:42 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.236 17:23:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.236 17:23:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.236 17:23:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.236 17:23:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.236 17:23:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.236 17:23:42 -- 
paths/export.sh@5 -- # export PATH 00:13:04.236 17:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.236 17:23:42 -- nvmf/common.sh@46 -- # : 0 00:13:04.236 17:23:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:04.236 17:23:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:04.236 17:23:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:04.236 17:23:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.236 17:23:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.236 17:23:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:04.236 17:23:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:04.236 17:23:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:04.236 17:23:42 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:04.236 17:23:42 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.236 17:23:42 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:04.236 17:23:42 -- target/invalid.sh@14 -- # target=foobar 00:13:04.236 17:23:42 -- target/invalid.sh@16 -- # RANDOM=0 00:13:04.236 17:23:42 -- target/invalid.sh@34 -- # nvmftestinit 00:13:04.236 17:23:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:04.236 17:23:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.236 17:23:42 -- nvmf/common.sh@436 -- # prepare_net_devs 
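The totals checked earlier in this trace (rpc.sh@112's `(( 7 > 0 ))` and rpc.sh@113's `(( 784 > 0 ))`) come from the `jsum` helper whose body xtrace prints at rpc.sh@19-20: a jq filter piped into an awk running sum. A minimal standalone sketch of that helper, fed a trimmed stand-in for the `nvmf_get_stats` JSON shown above (jq being installed is assumed; the real helper reads live rpc.py output, not a file):

```shell
#!/usr/bin/env bash
# Standalone sketch of the jsum helper traced at rpc.sh@19-20:
# apply a jq filter, then sum the emitted numbers with awk.
# stats.json is a trimmed stand-in for the nvmf_get_stats output above.
cat > /tmp/stats.json <<'EOF'
{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_0","admin_qpairs":2,"io_qpairs":196},
  {"name":"nvmf_tgt_poll_group_1","admin_qpairs":2,"io_qpairs":196},
  {"name":"nvmf_tgt_poll_group_2","admin_qpairs":1,"io_qpairs":196},
  {"name":"nvmf_tgt_poll_group_3","admin_qpairs":2,"io_qpairs":196}
]}
EOF

jsum() {
    local filter=$1
    jq "$filter" /tmp/stats.json | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # prints 7, matching rpc.sh@112
jsum '.poll_groups[].io_qpairs'      # prints 784, matching rpc.sh@113
```

Summing with awk rather than jq's own `add` keeps the helper agnostic to whether the filter emits a stream of numbers or one per line.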
00:13:04.236 17:23:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:04.236 17:23:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:04.236 17:23:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.236 17:23:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.236 17:23:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.236 17:23:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:04.236 17:23:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:04.236 17:23:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:04.236 17:23:42 -- common/autotest_common.sh@10 -- # set +x 00:13:09.506 17:23:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:09.506 17:23:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:09.506 17:23:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:09.506 17:23:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:09.506 17:23:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:09.506 17:23:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:09.506 17:23:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:09.506 17:23:48 -- nvmf/common.sh@294 -- # net_devs=() 00:13:09.506 17:23:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:09.506 17:23:48 -- nvmf/common.sh@295 -- # e810=() 00:13:09.506 17:23:48 -- nvmf/common.sh@295 -- # local -ga e810 00:13:09.506 17:23:48 -- nvmf/common.sh@296 -- # x722=() 00:13:09.506 17:23:48 -- nvmf/common.sh@296 -- # local -ga x722 00:13:09.506 17:23:48 -- nvmf/common.sh@297 -- # mlx=() 00:13:09.506 17:23:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:09.506 17:23:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.506 17:23:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:09.506 17:23:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:09.506 17:23:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:09.506 17:23:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.506 17:23:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:09.506 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:09.506 17:23:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.506 17:23:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:09.506 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:09.506 17:23:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.506 
17:23:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:09.506 17:23:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.506 17:23:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.506 17:23:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.506 17:23:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.506 17:23:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:09.506 Found net devices under 0000:af:00.0: cvl_0_0 00:13:09.506 17:23:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.506 17:23:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.506 17:23:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.506 17:23:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.506 17:23:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.506 17:23:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:09.506 Found net devices under 0000:af:00.1: cvl_0_1 00:13:09.506 17:23:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.506 17:23:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:09.506 17:23:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:09.506 17:23:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:09.506 17:23:48 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:09.506 17:23:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.506 17:23:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.506 17:23:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:09.506 17:23:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.506 17:23:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.506 17:23:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:09.506 17:23:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.506 17:23:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.506 17:23:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:09.506 17:23:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:09.506 17:23:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.506 17:23:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.506 17:23:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.506 17:23:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.506 17:23:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:09.506 17:23:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.506 17:23:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.506 17:23:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.506 17:23:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:09.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:09.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:13:09.506 00:13:09.506 --- 10.0.0.2 ping statistics --- 00:13:09.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.506 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:09.506 17:23:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:13:09.506 00:13:09.506 --- 10.0.0.1 ping statistics --- 00:13:09.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.506 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:09.506 17:23:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.506 17:23:48 -- nvmf/common.sh@410 -- # return 0 00:13:09.506 17:23:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:09.506 17:23:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.506 17:23:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:09.506 17:23:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.506 17:23:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:09.506 17:23:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:09.506 17:23:48 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:09.506 17:23:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:09.506 17:23:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.507 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:09.507 17:23:48 -- nvmf/common.sh@469 -- # nvmfpid=4027276 00:13:09.507 17:23:48 -- nvmf/common.sh@470 -- # waitforlisten 4027276 00:13:09.507 17:23:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.507 17:23:48 -- common/autotest_common.sh@819 
-- # '[' -z 4027276 ']' 00:13:09.507 17:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.507 17:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.507 17:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.507 17:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.507 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:13:09.765 [2024-07-12 17:23:48.502551] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:09.765 [2024-07-12 17:23:48.502607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.765 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.765 [2024-07-12 17:23:48.589494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.765 [2024-07-12 17:23:48.631319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:09.765 [2024-07-12 17:23:48.631466] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.765 [2024-07-12 17:23:48.631477] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.765 [2024-07-12 17:23:48.631486] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
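Each negative test that follows captures the JSON-RPC error text into `$out` and glob-matches the message (e.g. invalid.sh@41's `== *Unable to find target*`). A self-contained illustration of that bash idiom, using a canned error string rather than a live rpc.py call:

```shell
#!/usr/bin/env bash
# Minimal sketch of the error-matching idiom used by invalid.sh below:
# capture the expected failure output, then glob-match the message.
# Here $out is a canned JSON-RPC error, not real rpc.py output.
out='{"code": -32603, "message": "Unable to find target foobar"}'

if [[ $out == *"Unable to find target"* ]]; then
    echo "got expected error"
fi
```

Using `[[ ... == *pattern* ]]` rather than grep keeps the check in-process and lets the test assert on the exact substring regardless of surrounding JSON formatting.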
00:13:09.765 [2024-07-12 17:23:48.631583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.765 [2024-07-12 17:23:48.631683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.765 [2024-07-12 17:23:48.631753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.765 [2024-07-12 17:23:48.631756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.702 17:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.702 17:23:49 -- common/autotest_common.sh@852 -- # return 0 00:13:10.702 17:23:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:10.702 17:23:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:10.702 17:23:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.702 17:23:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.702 17:23:49 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.702 17:23:49 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16631 00:13:10.961 [2024-07-12 17:23:49.690942] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:10.961 17:23:49 -- target/invalid.sh@40 -- # out='request: 00:13:10.961 { 00:13:10.961 "nqn": "nqn.2016-06.io.spdk:cnode16631", 00:13:10.961 "tgt_name": "foobar", 00:13:10.961 "method": "nvmf_create_subsystem", 00:13:10.961 "req_id": 1 00:13:10.961 } 00:13:10.961 Got JSON-RPC error response 00:13:10.961 response: 00:13:10.961 { 00:13:10.961 "code": -32603, 00:13:10.961 "message": "Unable to find target foobar" 00:13:10.961 }' 00:13:10.961 17:23:49 -- target/invalid.sh@41 -- # [[ request: 00:13:10.961 { 00:13:10.961 "nqn": "nqn.2016-06.io.spdk:cnode16631", 00:13:10.961 "tgt_name": "foobar", 00:13:10.961 "method": 
"nvmf_create_subsystem", 00:13:10.961 "req_id": 1 00:13:10.961 } 00:13:10.961 Got JSON-RPC error response 00:13:10.961 response: 00:13:10.961 { 00:13:10.961 "code": -32603, 00:13:10.961 "message": "Unable to find target foobar" 00:13:10.961 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:10.961 17:23:49 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:10.961 17:23:49 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24511 00:13:10.961 [2024-07-12 17:23:49.867590] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24511: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:10.961 17:23:49 -- target/invalid.sh@45 -- # out='request: 00:13:10.961 { 00:13:10.961 "nqn": "nqn.2016-06.io.spdk:cnode24511", 00:13:10.961 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:10.961 "method": "nvmf_create_subsystem", 00:13:10.961 "req_id": 1 00:13:10.961 } 00:13:10.961 Got JSON-RPC error response 00:13:10.961 response: 00:13:10.961 { 00:13:10.961 "code": -32602, 00:13:10.961 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:10.961 }' 00:13:10.961 17:23:49 -- target/invalid.sh@46 -- # [[ request: 00:13:10.961 { 00:13:10.961 "nqn": "nqn.2016-06.io.spdk:cnode24511", 00:13:10.961 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:10.961 "method": "nvmf_create_subsystem", 00:13:10.961 "req_id": 1 00:13:10.961 } 00:13:10.961 Got JSON-RPC error response 00:13:10.961 response: 00:13:10.961 { 00:13:10.961 "code": -32602, 00:13:10.961 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:10.961 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.961 17:23:49 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:10.961 17:23:49 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9939 00:13:11.221 [2024-07-12 
17:23:50.048205] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9939: invalid model number 'SPDK_Controller' 00:13:11.221 17:23:50 -- target/invalid.sh@50 -- # out='request: 00:13:11.221 { 00:13:11.221 "nqn": "nqn.2016-06.io.spdk:cnode9939", 00:13:11.221 "model_number": "SPDK_Controller\u001f", 00:13:11.221 "method": "nvmf_create_subsystem", 00:13:11.221 "req_id": 1 00:13:11.221 } 00:13:11.221 Got JSON-RPC error response 00:13:11.221 response: 00:13:11.221 { 00:13:11.221 "code": -32602, 00:13:11.221 "message": "Invalid MN SPDK_Controller\u001f" 00:13:11.221 }' 00:13:11.221 17:23:50 -- target/invalid.sh@51 -- # [[ request: 00:13:11.221 { 00:13:11.221 "nqn": "nqn.2016-06.io.spdk:cnode9939", 00:13:11.221 "model_number": "SPDK_Controller\u001f", 00:13:11.221 "method": "nvmf_create_subsystem", 00:13:11.221 "req_id": 1 00:13:11.221 } 00:13:11.221 Got JSON-RPC error response 00:13:11.221 response: 00:13:11.221 { 00:13:11.221 "code": -32602, 00:13:11.221 "message": "Invalid MN SPDK_Controller\u001f" 00:13:11.221 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:11.221 17:23:50 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:11.221 17:23:50 -- target/invalid.sh@19 -- # local length=21 ll 00:13:11.221 17:23:50 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.221 17:23:50 -- target/invalid.sh@21 -- # local chars 00:13:11.221 17:23:50 -- target/invalid.sh@22 -- # local string 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.221 
17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 49 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=1 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 86 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=V 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 93 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=']' 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 72 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=H 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 77 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=M 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 123 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+='{' 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 
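The `printf %x` / `echo -e '\xNN'` / `string+=` records traced here are one character per iteration of invalid.sh's `gen_random_s` loop. A hypothetical reconstruction of that loop (codes restricted to 33..126 in this sketch so that no character is lost to command-substitution whitespace stripping; the traced script draws from a wider 32..127 table and separately guards against a leading '-'):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the gen_random_s loop traced here:
# draw a printable ASCII code, render it via printf %x plus echo -e '\xNN',
# and append it to the string until the requested length is reached.
gen_random_s() {
    local length=$1 ll string= chars hex
    chars=({33..126})   # narrower than the traced script's 32..127 table
    for (( ll = 0; ll < length; ll++ )); do
        hex=$(printf %x "${chars[RANDOM % ${#chars[@]}]}")
        string+=$(echo -e "\x$hex")
    done
    printf '%s\n' "$string"
}

gen_random_s 21   # a 21-character random string, e.g. of the 1V]HM{... shape
```

Emitting the result with `printf '%s\n'` instead of `echo` avoids misparsing strings that happen to begin with `-e` or `-n`.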
00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 42 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+='*' 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 79 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=O 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 98 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=b 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 59 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=';' 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 113 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=q 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 52 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=4 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 
116 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=t 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 90 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=Z 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 69 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=E 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 122 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # string+=z 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.221 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.221 17:23:50 -- target/invalid.sh@25 -- # printf %x 119 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=w 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 76 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=L 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 89 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e 
'\x59' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=Y 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 107 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=k 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 34 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+='"' 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:13:11.491 17:23:50 -- target/invalid.sh@31 -- # echo '1V]HM{*Ob;q4tZEzwLYk"' 00:13:11.491 17:23:50 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '1V]HM{*Ob;q4tZEzwLYk"' nqn.2016-06.io.spdk:cnode21161 00:13:11.491 [2024-07-12 17:23:50.349251] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21161: invalid serial number '1V]HM{*Ob;q4tZEzwLYk"' 00:13:11.491 17:23:50 -- target/invalid.sh@54 -- # out='request: 00:13:11.491 { 00:13:11.491 "nqn": "nqn.2016-06.io.spdk:cnode21161", 00:13:11.491 "serial_number": "1V]HM{*Ob;q4tZEzwLYk\"", 00:13:11.491 "method": "nvmf_create_subsystem", 00:13:11.491 "req_id": 1 00:13:11.491 } 00:13:11.491 Got JSON-RPC error response 00:13:11.491 response: 00:13:11.491 { 00:13:11.491 "code": -32602, 00:13:11.491 "message": "Invalid SN 1V]HM{*Ob;q4tZEzwLYk\"" 00:13:11.491 }' 00:13:11.491 17:23:50 -- target/invalid.sh@55 -- # [[ request: 00:13:11.491 { 00:13:11.491 "nqn": "nqn.2016-06.io.spdk:cnode21161", 00:13:11.491 "serial_number": 
"1V]HM{*Ob;q4tZEzwLYk\"", 00:13:11.491 "method": "nvmf_create_subsystem", 00:13:11.491 "req_id": 1 00:13:11.491 } 00:13:11.491 Got JSON-RPC error response 00:13:11.491 response: 00:13:11.491 { 00:13:11.491 "code": -32602, 00:13:11.491 "message": "Invalid SN 1V]HM{*Ob;q4tZEzwLYk\"" 00:13:11.491 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:11.491 17:23:50 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:11.491 17:23:50 -- target/invalid.sh@19 -- # local length=41 ll 00:13:11.491 17:23:50 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.491 17:23:50 -- target/invalid.sh@21 -- # local chars 00:13:11.491 17:23:50 -- target/invalid.sh@22 -- # local string 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 95 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=_ 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 120 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=x 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 102 
00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=f 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 45 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=- 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 92 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+='\' 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 82 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=R 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 71 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=G 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 69 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=E 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 33 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x21' 
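The wrapped xtrace above is target/invalid.sh's gen_random_s building a random serial/model string one character at a time: printf %x turns a decimal code into hex, echo -e '\xNN' renders the byte, and string+= appends it until ll reaches length. A minimal standalone sketch of that loop, reconstructed from the trace (this is not the exact invalid.sh implementation; codes are restricted to 33..126 here so command substitution never strips a whitespace character):

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s loop visible in the trace above.
# Assumption: real invalid.sh draws from codes 32..127; this sketch
# uses 33..126 to avoid $(...) stripping spaces/DEL from the result.
gen_random_s() {
    local length=$1 ll code hex string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 33 + RANDOM % 94 ))   # printable, non-space ASCII
        hex=$(printf %x "$code")       # e.g. 42 -> 2a
        string+=$(echo -e "\\x$hex")   # render the byte, append it
    done
    printf '%s\n' "$string"            # printf: safe even if string starts with '-'
}
```

The generated string is then handed to rpc.py nvmf_create_subsystem as -s (serial) or -d (model number), exactly as the trace shows, to provoke the "Invalid SN"/"Invalid MN" errors.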
00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+='!' 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 35 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+='#' 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # printf %x 50 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:11.491 17:23:50 -- target/invalid.sh@25 -- # string+=2 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.491 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 126 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # string+='~' 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 53 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # string+=5 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 100 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # string+=d 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 37 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # 
string+=% 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 99 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # string+=c 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 93 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # string+=']' 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 74 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # string+=J 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # printf %x 110 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:11.755 17:23:50 -- target/invalid.sh@25 -- # string+=n 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.755 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 93 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=']' 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 76 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=L 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 40 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+='(' 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 56 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=8 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 52 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=4 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 104 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=h 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 42 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+='*' 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 66 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=B 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # 
(( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 123 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+='{' 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 90 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=Z 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 80 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=P 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 120 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=x 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 82 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=R 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 114 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=r 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- 
target/invalid.sh@25 -- # printf %x 118 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=v 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 81 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=Q 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 64 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=@ 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 89 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=Y 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 84 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=T 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 34 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+='"' 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 127 00:13:11.756 17:23:50 -- 
target/invalid.sh@25 -- # echo -e '\x7f' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # printf %x 112 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:11.756 17:23:50 -- target/invalid.sh@25 -- # string+=p 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.756 17:23:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.756 17:23:50 -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:13:11.756 17:23:50 -- target/invalid.sh@31 -- # echo '_xf-\RGE!#2~5d%c]Jn]L(84h*B{ZPxRrvQ@YT"p' 00:13:11.756 17:23:50 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_xf-\RGE!#2~5d%c]Jn]L(84h*B{ZPxRrvQ@YT"p' nqn.2016-06.io.spdk:cnode23430 00:13:12.014 [2024-07-12 17:23:50.863143] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23430: invalid model number '_xf-\RGE!#2~5d%c]Jn]L(84h*B{ZPxRrvQ@YT"p' 00:13:12.014 17:23:50 -- target/invalid.sh@58 -- # out='request: 00:13:12.014 { 00:13:12.014 "nqn": "nqn.2016-06.io.spdk:cnode23430", 00:13:12.014 "model_number": "_xf-\\RGE!#2~5d%c]Jn]L(84h*B{ZPxRrvQ@YT\"\u007fp", 00:13:12.014 "method": "nvmf_create_subsystem", 00:13:12.014 "req_id": 1 00:13:12.014 } 00:13:12.014 Got JSON-RPC error response 00:13:12.014 response: 00:13:12.014 { 00:13:12.014 "code": -32602, 00:13:12.014 "message": "Invalid MN _xf-\\RGE!#2~5d%c]Jn]L(84h*B{ZPxRrvQ@YT\"\u007fp" 00:13:12.014 }' 00:13:12.014 17:23:50 -- target/invalid.sh@59 -- # [[ request: 00:13:12.014 { 00:13:12.014 "nqn": "nqn.2016-06.io.spdk:cnode23430", 00:13:12.014 "model_number": "_xf-\\RGE!#2~5d%c]Jn]L(84h*B{ZPxRrvQ@YT\"\u007fp", 00:13:12.014 "method": "nvmf_create_subsystem", 00:13:12.014 "req_id": 1 00:13:12.014 } 00:13:12.014 Got JSON-RPC error response 
00:13:12.014 response: 00:13:12.014 { 00:13:12.014 "code": -32602, 00:13:12.014 "message": "Invalid MN _xf-\\RGE!#2~5d%c]Jn]L(84h*B{ZPxRrvQ@YT\"\u007fp" 00:13:12.014 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:12.014 17:23:50 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:12.272 [2024-07-12 17:23:51.116111] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.272 17:23:51 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:12.530 17:23:51 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:12.530 17:23:51 -- target/invalid.sh@67 -- # echo '' 00:13:12.530 17:23:51 -- target/invalid.sh@67 -- # head -n 1 00:13:12.530 17:23:51 -- target/invalid.sh@67 -- # IP= 00:13:12.530 17:23:51 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:12.788 [2024-07-12 17:23:51.622001] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:12.788 17:23:51 -- target/invalid.sh@69 -- # out='request: 00:13:12.788 { 00:13:12.788 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:12.788 "listen_address": { 00:13:12.788 "trtype": "tcp", 00:13:12.788 "traddr": "", 00:13:12.788 "trsvcid": "4421" 00:13:12.788 }, 00:13:12.788 "method": "nvmf_subsystem_remove_listener", 00:13:12.788 "req_id": 1 00:13:12.788 } 00:13:12.788 Got JSON-RPC error response 00:13:12.788 response: 00:13:12.788 { 00:13:12.788 "code": -32602, 00:13:12.788 "message": "Invalid parameters" 00:13:12.788 }' 00:13:12.788 17:23:51 -- target/invalid.sh@70 -- # [[ request: 00:13:12.788 { 00:13:12.788 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:12.788 "listen_address": { 00:13:12.788 "trtype": "tcp", 00:13:12.788 "traddr": "", 00:13:12.788 "trsvcid": "4421" 00:13:12.788 }, 
00:13:12.788 "method": "nvmf_subsystem_remove_listener", 00:13:12.788 "req_id": 1 00:13:12.788 } 00:13:12.788 Got JSON-RPC error response 00:13:12.788 response: 00:13:12.788 { 00:13:12.788 "code": -32602, 00:13:12.788 "message": "Invalid parameters" 00:13:12.788 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:12.788 17:23:51 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9076 -i 0 00:13:13.047 [2024-07-12 17:23:51.878970] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9076: invalid cntlid range [0-65519] 00:13:13.047 17:23:51 -- target/invalid.sh@73 -- # out='request: 00:13:13.047 { 00:13:13.047 "nqn": "nqn.2016-06.io.spdk:cnode9076", 00:13:13.047 "min_cntlid": 0, 00:13:13.047 "method": "nvmf_create_subsystem", 00:13:13.048 "req_id": 1 00:13:13.048 } 00:13:13.048 Got JSON-RPC error response 00:13:13.048 response: 00:13:13.048 { 00:13:13.048 "code": -32602, 00:13:13.048 "message": "Invalid cntlid range [0-65519]" 00:13:13.048 }' 00:13:13.048 17:23:51 -- target/invalid.sh@74 -- # [[ request: 00:13:13.048 { 00:13:13.048 "nqn": "nqn.2016-06.io.spdk:cnode9076", 00:13:13.048 "min_cntlid": 0, 00:13:13.048 "method": "nvmf_create_subsystem", 00:13:13.048 "req_id": 1 00:13:13.048 } 00:13:13.048 Got JSON-RPC error response 00:13:13.048 response: 00:13:13.048 { 00:13:13.048 "code": -32602, 00:13:13.048 "message": "Invalid cntlid range [0-65519]" 00:13:13.048 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.048 17:23:51 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26323 -i 65520 00:13:13.306 [2024-07-12 17:23:52.127841] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26323: invalid cntlid range [65520-65519] 00:13:13.306 17:23:52 -- target/invalid.sh@75 -- # out='request: 00:13:13.306 { 
00:13:13.306 "nqn": "nqn.2016-06.io.spdk:cnode26323", 00:13:13.306 "min_cntlid": 65520, 00:13:13.306 "method": "nvmf_create_subsystem", 00:13:13.306 "req_id": 1 00:13:13.306 } 00:13:13.306 Got JSON-RPC error response 00:13:13.306 response: 00:13:13.306 { 00:13:13.306 "code": -32602, 00:13:13.306 "message": "Invalid cntlid range [65520-65519]" 00:13:13.306 }' 00:13:13.306 17:23:52 -- target/invalid.sh@76 -- # [[ request: 00:13:13.306 { 00:13:13.306 "nqn": "nqn.2016-06.io.spdk:cnode26323", 00:13:13.306 "min_cntlid": 65520, 00:13:13.306 "method": "nvmf_create_subsystem", 00:13:13.306 "req_id": 1 00:13:13.306 } 00:13:13.306 Got JSON-RPC error response 00:13:13.306 response: 00:13:13.306 { 00:13:13.306 "code": -32602, 00:13:13.306 "message": "Invalid cntlid range [65520-65519]" 00:13:13.306 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.306 17:23:52 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26751 -I 0 00:13:13.564 [2024-07-12 17:23:52.384797] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26751: invalid cntlid range [1-0] 00:13:13.564 17:23:52 -- target/invalid.sh@77 -- # out='request: 00:13:13.564 { 00:13:13.565 "nqn": "nqn.2016-06.io.spdk:cnode26751", 00:13:13.565 "max_cntlid": 0, 00:13:13.565 "method": "nvmf_create_subsystem", 00:13:13.565 "req_id": 1 00:13:13.565 } 00:13:13.565 Got JSON-RPC error response 00:13:13.565 response: 00:13:13.565 { 00:13:13.565 "code": -32602, 00:13:13.565 "message": "Invalid cntlid range [1-0]" 00:13:13.565 }' 00:13:13.565 17:23:52 -- target/invalid.sh@78 -- # [[ request: 00:13:13.565 { 00:13:13.565 "nqn": "nqn.2016-06.io.spdk:cnode26751", 00:13:13.565 "max_cntlid": 0, 00:13:13.565 "method": "nvmf_create_subsystem", 00:13:13.565 "req_id": 1 00:13:13.565 } 00:13:13.565 Got JSON-RPC error response 00:13:13.565 response: 00:13:13.565 { 00:13:13.565 "code": -32602, 00:13:13.565 "message": 
"Invalid cntlid range [1-0]" 00:13:13.565 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.565 17:23:52 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17679 -I 65520 00:13:13.823 [2024-07-12 17:23:52.633718] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17679: invalid cntlid range [1-65520] 00:13:13.823 17:23:52 -- target/invalid.sh@79 -- # out='request: 00:13:13.823 { 00:13:13.823 "nqn": "nqn.2016-06.io.spdk:cnode17679", 00:13:13.823 "max_cntlid": 65520, 00:13:13.823 "method": "nvmf_create_subsystem", 00:13:13.823 "req_id": 1 00:13:13.823 } 00:13:13.823 Got JSON-RPC error response 00:13:13.823 response: 00:13:13.823 { 00:13:13.823 "code": -32602, 00:13:13.823 "message": "Invalid cntlid range [1-65520]" 00:13:13.823 }' 00:13:13.823 17:23:52 -- target/invalid.sh@80 -- # [[ request: 00:13:13.823 { 00:13:13.823 "nqn": "nqn.2016-06.io.spdk:cnode17679", 00:13:13.823 "max_cntlid": 65520, 00:13:13.823 "method": "nvmf_create_subsystem", 00:13:13.823 "req_id": 1 00:13:13.823 } 00:13:13.823 Got JSON-RPC error response 00:13:13.823 response: 00:13:13.823 { 00:13:13.823 "code": -32602, 00:13:13.823 "message": "Invalid cntlid range [1-65520]" 00:13:13.823 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.823 17:23:52 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15852 -i 6 -I 5 00:13:14.085 [2024-07-12 17:23:52.886676] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15852: invalid cntlid range [6-5] 00:13:14.085 17:23:52 -- target/invalid.sh@83 -- # out='request: 00:13:14.085 { 00:13:14.085 "nqn": "nqn.2016-06.io.spdk:cnode15852", 00:13:14.085 "min_cntlid": 6, 00:13:14.085 "max_cntlid": 5, 00:13:14.085 "method": "nvmf_create_subsystem", 00:13:14.085 "req_id": 1 00:13:14.085 } 00:13:14.085 Got 
JSON-RPC error response 00:13:14.085 response: 00:13:14.085 { 00:13:14.085 "code": -32602, 00:13:14.085 "message": "Invalid cntlid range [6-5]" 00:13:14.085 }' 00:13:14.085 17:23:52 -- target/invalid.sh@84 -- # [[ request: 00:13:14.085 { 00:13:14.085 "nqn": "nqn.2016-06.io.spdk:cnode15852", 00:13:14.085 "min_cntlid": 6, 00:13:14.085 "max_cntlid": 5, 00:13:14.085 "method": "nvmf_create_subsystem", 00:13:14.085 "req_id": 1 00:13:14.085 } 00:13:14.085 Got JSON-RPC error response 00:13:14.085 response: 00:13:14.085 { 00:13:14.085 "code": -32602, 00:13:14.085 "message": "Invalid cntlid range [6-5]" 00:13:14.085 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.085 17:23:52 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:14.343 17:23:53 -- target/invalid.sh@87 -- # out='request: 00:13:14.343 { 00:13:14.343 "name": "foobar", 00:13:14.343 "method": "nvmf_delete_target", 00:13:14.343 "req_id": 1 00:13:14.343 } 00:13:14.343 Got JSON-RPC error response 00:13:14.343 response: 00:13:14.343 { 00:13:14.343 "code": -32602, 00:13:14.343 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:14.343 }' 00:13:14.343 17:23:53 -- target/invalid.sh@88 -- # [[ request: 00:13:14.343 { 00:13:14.343 "name": "foobar", 00:13:14.343 "method": "nvmf_delete_target", 00:13:14.343 "req_id": 1 00:13:14.343 } 00:13:14.343 Got JSON-RPC error response 00:13:14.343 response: 00:13:14.343 { 00:13:14.343 "code": -32602, 00:13:14.343 "message": "The specified target doesn't exist, cannot delete it." 
00:13:14.343 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:14.343 17:23:53 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:14.343 17:23:53 -- target/invalid.sh@91 -- # nvmftestfini 00:13:14.343 17:23:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:14.343 17:23:53 -- nvmf/common.sh@116 -- # sync 00:13:14.343 17:23:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:14.343 17:23:53 -- nvmf/common.sh@119 -- # set +e 00:13:14.343 17:23:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:14.343 17:23:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:14.343 rmmod nvme_tcp 00:13:14.343 rmmod nvme_fabrics 00:13:14.343 rmmod nvme_keyring 00:13:14.343 17:23:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:14.343 17:23:53 -- nvmf/common.sh@123 -- # set -e 00:13:14.343 17:23:53 -- nvmf/common.sh@124 -- # return 0 00:13:14.343 17:23:53 -- nvmf/common.sh@477 -- # '[' -n 4027276 ']' 00:13:14.343 17:23:53 -- nvmf/common.sh@478 -- # killprocess 4027276 00:13:14.343 17:23:53 -- common/autotest_common.sh@926 -- # '[' -z 4027276 ']' 00:13:14.343 17:23:53 -- common/autotest_common.sh@930 -- # kill -0 4027276 00:13:14.343 17:23:53 -- common/autotest_common.sh@931 -- # uname 00:13:14.343 17:23:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:14.343 17:23:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4027276 00:13:14.343 17:23:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:14.343 17:23:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:14.343 17:23:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4027276' 00:13:14.343 killing process with pid 4027276 00:13:14.343 17:23:53 -- common/autotest_common.sh@945 -- # kill 4027276 00:13:14.343 17:23:53 -- common/autotest_common.sh@950 -- # wait 4027276 00:13:14.601 17:23:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
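Every nvmf_invalid case above follows the same shape: run an rpc.py call that is expected to fail, capture its JSON-RPC error response into $out, then glob-match $out against the expected message ([[ $out == *\I\n\v\a\l\i\d\ \S\N* ]] and friends). A hedged, standalone sketch of that negative-test pattern — expect_rpc_error is an illustrative name for this log, not an SPDK helper:

```shell
#!/usr/bin/env bash
# Generic negative-test helper in the style of invalid.sh above:
# the wrapped command must fail AND its combined stdout/stderr must
# contain the expected error text. Not part of SPDK; a sketch only.
expect_rpc_error() {
    local expected=$1; shift
    local out
    out=$("$@" 2>&1) && return 1   # the call succeeding means the test failed
    [[ $out == *"$expected"* ]]    # error message must match
}
```

Used against the real scripts/rpc.py, the trace's checks would read e.g. expect_rpc_error 'Invalid cntlid range' rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15852 -i 6 -I 5.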
00:13:14.601 17:23:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:14.601 17:23:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:14.601 17:23:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.601 17:23:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:14.601 17:23:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.601 17:23:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.601 17:23:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.507 17:23:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:16.507 00:13:16.507 real 0m12.667s 00:13:16.507 user 0m23.412s 00:13:16.507 sys 0m5.202s 00:13:16.507 17:23:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.507 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:13:16.507 ************************************ 00:13:16.507 END TEST nvmf_invalid 00:13:16.507 ************************************ 00:13:16.507 17:23:55 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:16.507 17:23:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:16.507 17:23:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:16.507 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:13:16.507 ************************************ 00:13:16.507 START TEST nvmf_abort 00:13:16.507 ************************************ 00:13:16.507 17:23:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:16.766 * Looking for test storage... 
00:13:16.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.766 17:23:55 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.766 17:23:55 -- nvmf/common.sh@7 -- # uname -s 00:13:16.766 17:23:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.766 17:23:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.766 17:23:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.766 17:23:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.766 17:23:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.766 17:23:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.766 17:23:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.766 17:23:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.766 17:23:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.766 17:23:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.766 17:23:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:16.766 17:23:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:16.766 17:23:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.766 17:23:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.766 17:23:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.766 17:23:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.766 17:23:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.766 17:23:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.766 17:23:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.766 17:23:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.766 17:23:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.766 17:23:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.766 17:23:55 -- paths/export.sh@5 -- # export PATH 00:13:16.766 17:23:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.766 17:23:55 -- nvmf/common.sh@46 -- # : 0 00:13:16.766 17:23:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:16.766 17:23:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:16.766 17:23:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:16.766 17:23:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.766 17:23:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.766 17:23:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:16.766 17:23:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:16.766 17:23:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:16.766 17:23:55 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.766 17:23:55 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:16.766 17:23:55 -- target/abort.sh@14 -- # nvmftestinit 00:13:16.766 17:23:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:16.766 17:23:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.766 17:23:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:16.766 17:23:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:16.766 17:23:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:16.766 17:23:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.766 17:23:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.766 17:23:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.766 17:23:55 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:16.766 17:23:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:16.766 17:23:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:16.766 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:13:22.094 17:24:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:22.094 17:24:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:22.094 17:24:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:22.094 17:24:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:22.094 17:24:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:22.094 17:24:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:22.094 17:24:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:22.094 17:24:01 -- nvmf/common.sh@294 -- # net_devs=() 00:13:22.094 17:24:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:22.094 17:24:01 -- nvmf/common.sh@295 -- # e810=() 00:13:22.094 17:24:01 -- nvmf/common.sh@295 -- # local -ga e810 00:13:22.094 17:24:01 -- nvmf/common.sh@296 -- # x722=() 00:13:22.094 17:24:01 -- nvmf/common.sh@296 -- # local -ga x722 00:13:22.094 17:24:01 -- nvmf/common.sh@297 -- # mlx=() 00:13:22.094 17:24:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:22.094 17:24:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.094 17:24:01 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.094 17:24:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:22.094 17:24:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:22.094 17:24:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:22.094 17:24:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.094 17:24:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:22.094 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:22.094 17:24:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.094 17:24:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:22.094 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:22.094 17:24:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:22.094 17:24:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:22.094 17:24:01 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.094 17:24:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.094 17:24:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.094 17:24:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.094 17:24:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:22.094 Found net devices under 0000:af:00.0: cvl_0_0 00:13:22.094 17:24:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.094 17:24:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.094 17:24:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.094 17:24:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.094 17:24:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.094 17:24:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:22.094 Found net devices under 0000:af:00.1: cvl_0_1 00:13:22.094 17:24:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.094 17:24:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:22.094 17:24:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:22.094 17:24:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:22.094 17:24:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:22.094 17:24:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.094 17:24:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.094 17:24:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.094 17:24:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:22.094 17:24:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.094 17:24:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.094 17:24:01 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:22.094 17:24:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.094 17:24:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.094 17:24:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:22.094 17:24:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:22.094 17:24:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.094 17:24:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.354 17:24:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.354 17:24:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.354 17:24:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:22.354 17:24:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.354 17:24:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.354 17:24:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.354 17:24:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:22.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:22.354 00:13:22.354 --- 10.0.0.2 ping statistics --- 00:13:22.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.354 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:22.354 17:24:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:13:22.354 00:13:22.354 --- 10.0.0.1 ping statistics --- 00:13:22.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.354 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:22.354 17:24:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.354 17:24:01 -- nvmf/common.sh@410 -- # return 0 00:13:22.354 17:24:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:22.354 17:24:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.354 17:24:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:22.354 17:24:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:22.354 17:24:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.354 17:24:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:22.354 17:24:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:22.613 17:24:01 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:22.614 17:24:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:22.614 17:24:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:22.614 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:13:22.614 17:24:01 -- nvmf/common.sh@469 -- # nvmfpid=4032063 00:13:22.614 17:24:01 -- nvmf/common.sh@470 -- # waitforlisten 4032063 00:13:22.614 17:24:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.614 17:24:01 -- common/autotest_common.sh@819 -- # '[' -z 4032063 ']' 00:13:22.614 17:24:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.614 17:24:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:22.614 17:24:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:22.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.614 17:24:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:22.614 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:13:22.614 [2024-07-12 17:24:01.380695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:22.614 [2024-07-12 17:24:01.380748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.614 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.614 [2024-07-12 17:24:01.461014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.614 [2024-07-12 17:24:01.502665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.614 [2024-07-12 17:24:01.502810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.614 [2024-07-12 17:24:01.502821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.614 [2024-07-12 17:24:01.502831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.614 [2024-07-12 17:24:01.502934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.614 [2024-07-12 17:24:01.502954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.614 [2024-07-12 17:24:01.506269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.550 17:24:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:23.551 17:24:02 -- common/autotest_common.sh@852 -- # return 0 00:13:23.551 17:24:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:23.551 17:24:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 17:24:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.551 17:24:02 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:23.551 17:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 [2024-07-12 17:24:02.357289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.551 17:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.551 17:24:02 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:23.551 17:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 Malloc0 00:13:23.551 17:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.551 17:24:02 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:23.551 17:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 Delay0 00:13:23.551 17:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.551 17:24:02 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:23.551 17:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 17:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.551 17:24:02 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:23.551 17:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 17:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.551 17:24:02 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:23.551 17:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 [2024-07-12 17:24:02.436150] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.551 17:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.551 17:24:02 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.551 17:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.551 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 17:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.551 17:24:02 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:23.551 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.810 [2024-07-12 17:24:02.553118] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:25.712 Initializing NVMe Controllers 00:13:25.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:13:25.712 controller IO queue size 128 less than required 00:13:25.712 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:25.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:25.712 Initialization complete. Launching workers. 00:13:25.712 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28733 00:13:25.712 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28794, failed to submit 62 00:13:25.712 success 28733, unsuccess 61, failed 0 00:13:25.712 17:24:04 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:25.712 17:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.712 17:24:04 -- common/autotest_common.sh@10 -- # set +x 00:13:25.712 17:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.712 17:24:04 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:25.712 17:24:04 -- target/abort.sh@38 -- # nvmftestfini 00:13:25.712 17:24:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:25.712 17:24:04 -- nvmf/common.sh@116 -- # sync 00:13:25.712 17:24:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:25.712 17:24:04 -- nvmf/common.sh@119 -- # set +e 00:13:25.712 17:24:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:25.712 17:24:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:25.712 rmmod nvme_tcp 00:13:25.712 rmmod nvme_fabrics 00:13:25.971 rmmod nvme_keyring 00:13:25.971 17:24:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:25.971 17:24:04 -- nvmf/common.sh@123 -- # set -e 00:13:25.971 17:24:04 -- nvmf/common.sh@124 -- # return 0 00:13:25.971 17:24:04 -- nvmf/common.sh@477 -- # '[' -n 4032063 ']' 00:13:25.971 17:24:04 -- nvmf/common.sh@478 -- # killprocess 4032063 00:13:25.971 17:24:04 -- common/autotest_common.sh@926 -- # '[' -z 4032063 ']' 00:13:25.971 17:24:04 
-- common/autotest_common.sh@930 -- # kill -0 4032063 00:13:25.971 17:24:04 -- common/autotest_common.sh@931 -- # uname 00:13:25.971 17:24:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:25.971 17:24:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4032063 00:13:25.971 17:24:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:25.971 17:24:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:25.971 17:24:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4032063' 00:13:25.971 killing process with pid 4032063 00:13:25.971 17:24:04 -- common/autotest_common.sh@945 -- # kill 4032063 00:13:25.971 17:24:04 -- common/autotest_common.sh@950 -- # wait 4032063 00:13:26.230 17:24:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:26.230 17:24:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:26.230 17:24:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:26.230 17:24:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.230 17:24:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:26.230 17:24:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.230 17:24:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.230 17:24:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.132 17:24:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:28.132 00:13:28.132 real 0m11.557s 00:13:28.132 user 0m13.742s 00:13:28.132 sys 0m5.246s 00:13:28.132 17:24:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.132 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:28.132 ************************************ 00:13:28.132 END TEST nvmf_abort 00:13:28.132 ************************************ 00:13:28.132 17:24:07 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:13:28.132 17:24:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:28.132 17:24:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.132 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:28.132 ************************************ 00:13:28.132 START TEST nvmf_ns_hotplug_stress 00:13:28.132 ************************************ 00:13:28.132 17:24:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:28.391 * Looking for test storage... 00:13:28.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.391 17:24:07 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.391 17:24:07 -- nvmf/common.sh@7 -- # uname -s 00:13:28.391 17:24:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.391 17:24:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.391 17:24:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.391 17:24:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.391 17:24:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.391 17:24:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.391 17:24:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.391 17:24:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.391 17:24:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.391 17:24:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.391 17:24:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:28.391 17:24:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:28.391 17:24:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.391 17:24:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:13:28.391 17:24:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.391 17:24:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.391 17:24:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.391 17:24:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.391 17:24:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.391 17:24:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.391 17:24:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.391 17:24:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.391 17:24:07 -- paths/export.sh@5 -- # export PATH 00:13:28.391 17:24:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.391 17:24:07 -- nvmf/common.sh@46 -- # : 0 00:13:28.391 17:24:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:28.391 17:24:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:28.391 17:24:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:28.391 17:24:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.391 17:24:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.391 17:24:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:28.391 17:24:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:28.391 17:24:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:28.391 17:24:07 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.391 17:24:07 -- target/ns_hotplug_stress.sh@22 -- # 
nvmftestinit 00:13:28.391 17:24:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:28.391 17:24:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.391 17:24:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:28.391 17:24:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:28.391 17:24:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:28.391 17:24:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.391 17:24:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.391 17:24:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.391 17:24:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:28.391 17:24:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:28.391 17:24:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:28.392 17:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:33.670 17:24:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:33.670 17:24:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:33.670 17:24:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:33.670 17:24:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:33.670 17:24:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:33.670 17:24:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:33.670 17:24:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:33.670 17:24:12 -- nvmf/common.sh@294 -- # net_devs=() 00:13:33.670 17:24:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:33.670 17:24:12 -- nvmf/common.sh@295 -- # e810=() 00:13:33.670 17:24:12 -- nvmf/common.sh@295 -- # local -ga e810 00:13:33.670 17:24:12 -- nvmf/common.sh@296 -- # x722=() 00:13:33.670 17:24:12 -- nvmf/common.sh@296 -- # local -ga x722 00:13:33.670 17:24:12 -- nvmf/common.sh@297 -- # mlx=() 00:13:33.670 17:24:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:33.670 17:24:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.670 17:24:12 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.670 17:24:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:33.670 17:24:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:33.670 17:24:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:33.670 17:24:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:33.670 17:24:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:33.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:33.670 17:24:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:33.670 17:24:12 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:33.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:33.670 17:24:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:33.670 17:24:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:33.670 17:24:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.670 17:24:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:33.670 17:24:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.670 17:24:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:33.670 Found net devices under 0000:af:00.0: cvl_0_0 00:13:33.670 17:24:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.670 17:24:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:33.670 17:24:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.670 17:24:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:33.670 17:24:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.670 17:24:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:33.670 Found net devices under 0000:af:00.1: cvl_0_1 00:13:33.670 17:24:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.670 17:24:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:33.670 17:24:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:33.670 17:24:12 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:13:33.670 17:24:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:33.670 17:24:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.670 17:24:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.670 17:24:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.670 17:24:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:33.670 17:24:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.670 17:24:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.670 17:24:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:33.670 17:24:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.670 17:24:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.670 17:24:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:33.670 17:24:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:33.670 17:24:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.670 17:24:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.670 17:24:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.670 17:24:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.670 17:24:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:33.670 17:24:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.670 17:24:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.670 17:24:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.670 17:24:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:33.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:33.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:13:33.670 00:13:33.670 --- 10.0.0.2 ping statistics --- 00:13:33.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.670 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:13:33.670 17:24:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:13:33.670 00:13:33.670 --- 10.0.0.1 ping statistics --- 00:13:33.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.670 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:13:33.670 17:24:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.670 17:24:12 -- nvmf/common.sh@410 -- # return 0 00:13:33.670 17:24:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:33.670 17:24:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.670 17:24:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:33.670 17:24:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.670 17:24:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:33.670 17:24:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:33.670 17:24:12 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:33.670 17:24:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:33.670 17:24:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:33.670 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:13:33.670 17:24:12 -- nvmf/common.sh@469 -- # nvmfpid=4036542 00:13:33.670 17:24:12 -- nvmf/common.sh@470 -- # waitforlisten 4036542 00:13:33.670 17:24:12 -- common/autotest_common.sh@819 -- # '[' -z 4036542 ']' 00:13:33.670 17:24:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.670 17:24:12 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:13:33.670 17:24:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.670 17:24:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:33.670 17:24:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:33.670 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:13:33.670 [2024-07-12 17:24:12.471832] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:33.670 [2024-07-12 17:24:12.471884] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.670 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.670 [2024-07-12 17:24:12.549018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:33.670 [2024-07-12 17:24:12.590973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:33.670 [2024-07-12 17:24:12.591117] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.670 [2024-07-12 17:24:12.591128] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.670 [2024-07-12 17:24:12.591137] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:33.670 [2024-07-12 17:24:12.591249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.670 [2024-07-12 17:24:12.591342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.670 [2024-07-12 17:24:12.591346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.606 17:24:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:34.606 17:24:13 -- common/autotest_common.sh@852 -- # return 0 00:13:34.606 17:24:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:34.606 17:24:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:34.606 17:24:13 -- common/autotest_common.sh@10 -- # set +x 00:13:34.606 17:24:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.606 17:24:13 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:34.606 17:24:13 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:34.865 [2024-07-12 17:24:13.664415] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.865 17:24:13 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:35.124 17:24:13 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.383 [2024-07-12 17:24:14.167292] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.383 17:24:14 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.641 17:24:14 -- target/ns_hotplug_stress.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:35.899 Malloc0 00:13:35.899 17:24:14 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:36.156 Delay0 00:13:36.156 17:24:14 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.412 17:24:15 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:36.669 NULL1 00:13:36.669 17:24:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:36.926 17:24:15 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4037236 00:13:36.926 17:24:15 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:36.926 17:24:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:36.926 17:24:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.926 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.184 17:24:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.441 17:24:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:37.441 17:24:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:37.441 true 00:13:37.698 17:24:16 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:37.698 17:24:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.955 17:24:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.955 17:24:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:37.955 17:24:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:38.213 true 00:13:38.213 17:24:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:38.213 17:24:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.150 Read completed with error (sct=0, sc=11) 00:13:39.150 17:24:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.409 17:24:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:39.409 17:24:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:39.668 true 00:13:39.668 17:24:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:39.668 17:24:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.927 17:24:18 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.186 17:24:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:40.186 17:24:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:40.444 true 00:13:40.444 17:24:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:40.444 17:24:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.396 17:24:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.654 17:24:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:41.654 17:24:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:41.913 true 00:13:41.913 17:24:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:41.913 17:24:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.171 17:24:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.429 17:24:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:42.429 17:24:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:42.688 true 00:13:42.688 17:24:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:42.688 17:24:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:13:43.623 17:24:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.623 17:24:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:43.623 17:24:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:43.883 true 00:13:43.883 17:24:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:43.883 17:24:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.141 17:24:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.398 17:24:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:44.399 17:24:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:44.656 true 00:13:44.656 17:24:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:44.656 17:24:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.591 17:24:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.851 17:24:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:45.851 17:24:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:45.851 true 00:13:45.851 17:24:24 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:45.851 17:24:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.111 17:24:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.370 17:24:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:46.370 17:24:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:46.629 true 00:13:46.629 17:24:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:46.629 17:24:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.566 17:24:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.825 17:24:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:47.825 17:24:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:48.084 true 00:13:48.084 17:24:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:48.084 17:24:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.343 17:24:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.602 17:24:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:48.602 17:24:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1012 00:13:48.861 true 00:13:48.861 17:24:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:48.861 17:24:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.121 17:24:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.380 17:24:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:49.380 17:24:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:49.638 true 00:13:49.638 17:24:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:49.638 17:24:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.015 17:24:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.015 17:24:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:51.015 17:24:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:51.276 true 00:13:51.276 17:24:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:51.276 17:24:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.536 17:24:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.796 17:24:30 -- target/ns_hotplug_stress.sh@49 
-- # null_size=1015 00:13:51.796 17:24:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:52.055 true 00:13:52.055 17:24:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:52.055 17:24:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.992 17:24:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.992 17:24:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:52.992 17:24:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:53.251 true 00:13:53.251 17:24:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:53.251 17:24:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.510 17:24:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.769 17:24:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:53.769 17:24:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:54.027 true 00:13:54.027 17:24:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:54.027 17:24:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.995 17:24:33 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.265 17:24:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:55.265 17:24:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:55.265 true 00:13:55.265 17:24:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:55.265 17:24:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.558 17:24:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.816 17:24:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:55.816 17:24:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:56.074 true 00:13:56.074 17:24:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:56.074 17:24:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.010 17:24:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.269 17:24:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:57.269 17:24:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:57.528 true 
00:13:57.528 17:24:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:57.528 17:24:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.095 17:24:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.354 17:24:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:58.354 17:24:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:58.614 true 00:13:58.614 17:24:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:58.614 17:24:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.873 17:24:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.132 17:24:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:59.132 17:24:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:59.396 true 00:13:59.396 17:24:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:13:59.396 17:24:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.655 17:24:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.915 17:24:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:59.915 17:24:38 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:00.173 true 00:14:00.173 17:24:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:14:00.173 17:24:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.548 17:24:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.548 17:24:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:01.548 17:24:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:01.548 true 00:14:01.807 17:24:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:14:01.807 17:24:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.807 17:24:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.066 17:24:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:02.066 17:24:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:02.325 true 00:14:02.325 17:24:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:14:02.325 17:24:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:03.262 17:24:42 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.521 17:24:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:03.521 17:24:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:03.780 true 00:14:03.780 17:24:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:14:03.780 17:24:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.039 17:24:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.296 17:24:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:04.296 17:24:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:04.553 true 00:14:04.553 17:24:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:14:04.553 17:24:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.489 17:24:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.747 17:24:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:05.747 17:24:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:06.006 true 00:14:06.006 17:24:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236 00:14:06.006 17:24:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:14:06.264 17:24:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:06.523 17:24:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:14:06.524 17:24:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:14:06.524 true
00:14:06.782 17:24:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236
00:14:06.782 17:24:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.351 Initializing NVMe Controllers
00:14:07.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:07.351 Controller IO queue size 128, less than required.
00:14:07.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:07.351 Controller IO queue size 128, less than required.
00:14:07.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:07.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:07.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:07.351 Initialization complete. Launching workers.
00:14:07.351 ========================================================
00:14:07.351                                                                                                  Latency(us)
00:14:07.351 Device Information                                                                             : IOPS      MiB/s    Average        min        max
00:14:07.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:                        485.86      0.24  127868.55    3221.37 1169136.48
00:14:07.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:                      13917.47      6.80    9162.56    1818.75  455408.59
00:14:07.351 ========================================================
00:14:07.351 Total                                                                                          : 14403.33      7.03   13166.79    1818.75 1169136.48
00:14:07.351
00:14:07.610 17:24:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:07.869 17:24:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:14:07.869 17:24:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:14:07.869 true
00:14:08.127 17:24:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4037236
00:14:08.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4037236) - No such process
00:14:08.127 17:24:46 -- target/ns_hotplug_stress.sh@53 -- # wait 4037236
00:14:08.127 17:24:46 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:08.384 17:24:47 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:08.384 17:24:47 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:08.384 17:24:47 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:08.384 17:24:47 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:08.384 17:24:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:08.384 17:24:47 -- target/ns_hotplug_stress.sh@60 -- 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:08.642 null0 00:14:08.642 17:24:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:08.642 17:24:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:08.642 17:24:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:08.900 null1 00:14:08.900 17:24:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:08.900 17:24:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:08.900 17:24:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:09.158 null2 00:14:09.158 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.158 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.158 17:24:48 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:09.415 null3 00:14:09.415 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.415 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.416 17:24:48 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:09.674 null4 00:14:09.674 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.674 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.674 17:24:48 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:09.955 null5 00:14:09.955 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.955 17:24:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.955 17:24:48 -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:10.214 null6 00:14:10.214 17:24:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.214 17:24:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.214 17:24:49 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:10.472 null7 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
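The null0 through null7 creations in the trace come from a simple loop over nthreads; each bdev is 100 MiB with a 4096-byte block size. A standalone sketch of that setup loop, with `rpc` stubbed in place of scripts/rpc.py:

```shell
#!/usr/bin/env bash
rpc() { :; }   # stub; the real script invokes scripts/rpc.py against a running target

nthreads=8
created=0
for (( i = 0; i < nthreads; i++ )); do
    # 100 MiB null bdev with 4096-byte blocks, matching the trace
    rpc bdev_null_create "null$i" 100 4096
    (( ++created ))
done
```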
00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@66 -- # wait 4043534 4043537 4043540 4043543 4043546 4043549 4043551 4043553 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.472 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- 
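After setup, the script launches one add_remove worker per null bdev in the background, collects each PID, and waits for all of them; that is what the interleaved @62/@63/@64 trace lines and the `wait 4043534 4043537 …` call show. A standalone sketch of that fan-out, with `rpc` stubbed and the 10-iteration count taken from the `(( i < 10 ))` guard in the trace:

```shell
#!/usr/bin/env bash
rpc() { :; }   # stub for scripts/rpc.py

# Repeatedly attach and detach one namespace, as add_remove() does in the trace.
add_remove() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; i++ )); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for (( t = 0; t < nthreads; t++ )); do
    add_remove "$(( t + 1 ))" "null$t" &   # one background worker per bdev
    pids+=($!)
done
wait "${pids[@]}"   # block until every worker finishes its 10 cycles
```

Running the eight workers concurrently is the point of the stress test: add and remove RPCs for different NSIDs land on the subsystem in arbitrary order, which is why the remove/add lines in the remainder of the log interleave.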
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.730 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.988 17:24:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.247 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.506 17:24:50 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.506 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.765 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.023 17:24:50 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:12.023 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.282 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.282 17:24:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:12.282 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.283 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.542 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.807 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.066 17:24:51 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:13.066 17:24:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.324 17:24:52 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.324 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:13.583 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.842 17:24:52 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:13.842 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:14.100 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.100 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:14.358 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:14.617 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:14.617 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.617 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.618 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.875 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:15.133 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:15.133 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.133 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.133 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.133 17:24:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.133 17:24:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:15.133 17:24:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:15.133 17:24:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:15.133 17:24:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:15.133 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.133 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.391 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.648 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.648 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.648 17:24:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.907 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.907 17:24:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.907 17:24:54 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:15.907 17:24:54 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:15.907 17:24:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:15.907 17:24:54 -- nvmf/common.sh@116 -- # sync 00:14:15.907 17:24:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:15.907 17:24:54 -- nvmf/common.sh@119 -- # set +e 00:14:15.907 17:24:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:15.907 17:24:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:15.907 rmmod nvme_tcp 00:14:15.907 rmmod nvme_fabrics 00:14:15.907 rmmod nvme_keyring 
00:14:15.907 17:24:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:15.907 17:24:54 -- nvmf/common.sh@123 -- # set -e 00:14:15.907 17:24:54 -- nvmf/common.sh@124 -- # return 0 00:14:15.907 17:24:54 -- nvmf/common.sh@477 -- # '[' -n 4036542 ']' 00:14:15.907 17:24:54 -- nvmf/common.sh@478 -- # killprocess 4036542 00:14:15.907 17:24:54 -- common/autotest_common.sh@926 -- # '[' -z 4036542 ']' 00:14:15.907 17:24:54 -- common/autotest_common.sh@930 -- # kill -0 4036542 00:14:15.907 17:24:54 -- common/autotest_common.sh@931 -- # uname 00:14:15.907 17:24:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:15.907 17:24:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4036542 00:14:15.907 17:24:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:15.907 17:24:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:15.907 17:24:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4036542' 00:14:15.907 killing process with pid 4036542 00:14:15.907 17:24:54 -- common/autotest_common.sh@945 -- # kill 4036542 00:14:15.907 17:24:54 -- common/autotest_common.sh@950 -- # wait 4036542 00:14:16.166 17:24:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:16.166 17:24:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:16.166 17:24:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:16.166 17:24:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.166 17:24:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:16.166 17:24:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.166 17:24:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.166 17:24:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.702 17:24:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:18.702 00:14:18.702 real 0m50.049s 00:14:18.702 user 3m33.412s 00:14:18.702 sys 0m15.193s 00:14:18.702 17:24:57 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.702 17:24:57 -- common/autotest_common.sh@10 -- # set +x 00:14:18.702 ************************************ 00:14:18.702 END TEST nvmf_ns_hotplug_stress 00:14:18.702 ************************************ 00:14:18.702 17:24:57 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:18.702 17:24:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:18.702 17:24:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:18.702 17:24:57 -- common/autotest_common.sh@10 -- # set +x 00:14:18.702 ************************************ 00:14:18.702 START TEST nvmf_connect_stress 00:14:18.702 ************************************ 00:14:18.702 17:24:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:18.702 * Looking for test storage... 
00:14:18.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.702 17:24:57 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.702 17:24:57 -- nvmf/common.sh@7 -- # uname -s 00:14:18.702 17:24:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.702 17:24:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.702 17:24:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.702 17:24:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.702 17:24:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.702 17:24:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.702 17:24:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.702 17:24:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.702 17:24:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.702 17:24:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.702 17:24:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:18.702 17:24:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:18.702 17:24:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.702 17:24:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.702 17:24:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.702 17:24:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.702 17:24:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.702 17:24:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.702 17:24:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.702 17:24:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.702 17:24:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.702 17:24:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.702 17:24:57 -- paths/export.sh@5 -- # export PATH 00:14:18.702 17:24:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.702 17:24:57 -- nvmf/common.sh@46 -- # : 0 00:14:18.702 17:24:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:18.702 17:24:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:18.702 17:24:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:18.702 17:24:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.702 17:24:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.703 17:24:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:18.703 17:24:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:18.703 17:24:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:18.703 17:24:57 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:18.703 17:24:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:18.703 17:24:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.703 17:24:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:18.703 17:24:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:18.703 17:24:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:18.703 17:24:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.703 17:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.703 17:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.703 17:24:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:18.703 17:24:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:18.703 17:24:57 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:18.703 17:24:57 -- common/autotest_common.sh@10 -- # set +x 00:14:23.979 17:25:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:23.979 17:25:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:23.979 17:25:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:23.979 17:25:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:23.979 17:25:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:23.979 17:25:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:23.979 17:25:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:23.979 17:25:02 -- nvmf/common.sh@294 -- # net_devs=() 00:14:23.979 17:25:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:23.979 17:25:02 -- nvmf/common.sh@295 -- # e810=() 00:14:23.979 17:25:02 -- nvmf/common.sh@295 -- # local -ga e810 00:14:23.979 17:25:02 -- nvmf/common.sh@296 -- # x722=() 00:14:23.979 17:25:02 -- nvmf/common.sh@296 -- # local -ga x722 00:14:23.979 17:25:02 -- nvmf/common.sh@297 -- # mlx=() 00:14:23.979 17:25:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:23.979 17:25:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.979 17:25:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:23.979 17:25:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:23.979 17:25:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:23.979 17:25:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:23.979 17:25:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:23.979 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:23.979 17:25:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:23.979 17:25:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:23.979 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:23.979 17:25:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:23.979 17:25:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:23.979 17:25:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.979 17:25:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:23.979 17:25:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.979 17:25:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:23.979 Found net devices under 0000:af:00.0: cvl_0_0 00:14:23.979 17:25:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.979 17:25:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:23.979 17:25:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.979 17:25:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:23.979 17:25:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.979 17:25:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:23.979 Found net devices under 0000:af:00.1: cvl_0_1 00:14:23.979 17:25:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.979 17:25:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:23.979 17:25:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:23.979 17:25:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:23.979 17:25:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:23.979 17:25:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.979 17:25:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.979 17:25:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.979 17:25:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:23.979 17:25:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.979 17:25:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.979 17:25:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:23.979 17:25:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:23.979 17:25:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.979 17:25:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:23.979 17:25:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:23.979 17:25:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.979 17:25:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.979 17:25:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.979 17:25:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.979 17:25:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:23.979 17:25:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.979 17:25:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.979 17:25:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.979 17:25:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:23.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:14:23.979 00:14:23.979 --- 10.0.0.2 ping statistics --- 00:14:23.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.979 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:14:23.979 17:25:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:14:23.979 00:14:23.979 --- 10.0.0.1 ping statistics --- 00:14:23.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.979 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:14:23.980 17:25:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.980 17:25:02 -- nvmf/common.sh@410 -- # return 0 00:14:23.980 17:25:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:23.980 17:25:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.980 17:25:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:23.980 17:25:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:23.980 17:25:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.980 17:25:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:23.980 17:25:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:23.980 17:25:02 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:23.980 17:25:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:23.980 17:25:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:23.980 17:25:02 -- common/autotest_common.sh@10 -- # set +x 00:14:23.980 17:25:02 -- nvmf/common.sh@469 -- # nvmfpid=4048262 00:14:23.980 17:25:02 -- nvmf/common.sh@470 -- # waitforlisten 4048262 00:14:23.980 17:25:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:23.980 17:25:02 -- common/autotest_common.sh@819 -- # '[' -z 4048262 ']' 00:14:23.980 17:25:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.980 17:25:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:23.980 17:25:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:23.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.980 17:25:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:23.980 17:25:02 -- common/autotest_common.sh@10 -- # set +x 00:14:24.240 [2024-07-12 17:25:02.969898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:24.240 [2024-07-12 17:25:02.969956] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.240 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.240 [2024-07-12 17:25:03.046954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.240 [2024-07-12 17:25:03.088768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:24.240 [2024-07-12 17:25:03.088912] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.240 [2024-07-12 17:25:03.088922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.240 [2024-07-12 17:25:03.088932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:24.240 [2024-07-12 17:25:03.089043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.240 [2024-07-12 17:25:03.089132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.240 [2024-07-12 17:25:03.089134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.177 17:25:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:25.177 17:25:03 -- common/autotest_common.sh@852 -- # return 0 00:14:25.177 17:25:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:25.177 17:25:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:25.177 17:25:03 -- common/autotest_common.sh@10 -- # set +x 00:14:25.177 17:25:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.177 17:25:03 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.177 17:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.177 17:25:03 -- common/autotest_common.sh@10 -- # set +x 00:14:25.177 [2024-07-12 17:25:03.937526] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.177 17:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.177 17:25:03 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.177 17:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.177 17:25:03 -- common/autotest_common.sh@10 -- # set +x 00:14:25.177 17:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.177 17:25:03 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.177 17:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.177 17:25:03 -- common/autotest_common.sh@10 -- # set +x 00:14:25.177 [2024-07-12 17:25:03.975382] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:25.177 17:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.177 17:25:03 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.177 17:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.177 17:25:03 -- common/autotest_common.sh@10 -- # set +x 00:14:25.177 NULL1 00:14:25.177 17:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.177 17:25:03 -- target/connect_stress.sh@21 -- # PERF_PID=4048497 00:14:25.177 17:25:03 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.177 17:25:03 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:25.177 17:25:03 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.177 17:25:03 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:25.177 17:25:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:03 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 
-- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 17:25:04 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.177 17:25:04 -- target/connect_stress.sh@28 -- # cat 00:14:25.177 
17:25:04 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:25.177 17:25:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.177 17:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.177 17:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:25.746 17:25:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.746 17:25:04 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:25.746 17:25:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.746 17:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.746 17:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:26.005 17:25:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.005 17:25:04 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:26.005 17:25:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.005 17:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.005 17:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:26.264 17:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.264 17:25:05 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:26.264 17:25:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.264 17:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.264 17:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:26.522 17:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.522 17:25:05 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:26.522 17:25:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.522 17:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.522 17:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:26.782 17:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.782 17:25:05 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:26.782 17:25:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.782 17:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.782 17:25:05 -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.351 17:25:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.351 17:25:06 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:27.351 17:25:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.351 17:25:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.351 17:25:06 -- common/autotest_common.sh@10 -- # set +x 00:14:27.629 17:25:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.629 17:25:06 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:27.629 17:25:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.629 17:25:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.629 17:25:06 -- common/autotest_common.sh@10 -- # set +x 00:14:27.902 17:25:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.902 17:25:06 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:27.902 17:25:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.902 17:25:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.902 17:25:06 -- common/autotest_common.sh@10 -- # set +x 00:14:28.167 17:25:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.167 17:25:06 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:28.167 17:25:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.167 17:25:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.167 17:25:06 -- common/autotest_common.sh@10 -- # set +x 00:14:28.426 17:25:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.426 17:25:07 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:28.426 17:25:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.426 17:25:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.426 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:14:28.685 17:25:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.685 17:25:07 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:28.685 17:25:07 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.685 17:25:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.685 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:14:29.252 17:25:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.252 17:25:07 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:29.252 17:25:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.252 17:25:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.252 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:14:29.511 17:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.511 17:25:08 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:29.511 17:25:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.511 17:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.511 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:14:29.770 17:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.770 17:25:08 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:29.770 17:25:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.770 17:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.770 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:14:30.028 17:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.028 17:25:08 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:30.028 17:25:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.028 17:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.028 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:14:30.287 17:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.287 17:25:09 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:30.287 17:25:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.287 17:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.287 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:14:30.854 17:25:09 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.854 17:25:09 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:30.854 17:25:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.854 17:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.854 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:14:31.113 17:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.113 17:25:09 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:31.113 17:25:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.113 17:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.113 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:14:31.372 17:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.372 17:25:10 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:31.372 17:25:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.372 17:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.372 17:25:10 -- common/autotest_common.sh@10 -- # set +x 00:14:31.631 17:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.631 17:25:10 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:31.631 17:25:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.631 17:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.631 17:25:10 -- common/autotest_common.sh@10 -- # set +x 00:14:31.890 17:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.890 17:25:10 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:31.891 17:25:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.891 17:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.891 17:25:10 -- common/autotest_common.sh@10 -- # set +x 00:14:32.459 17:25:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.459 17:25:11 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:32.459 17:25:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.459 17:25:11 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.459 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:14:32.718 17:25:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.718 17:25:11 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:32.718 17:25:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.718 17:25:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.718 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:14:32.976 17:25:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.976 17:25:11 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:32.976 17:25:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.976 17:25:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.976 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:14:33.235 17:25:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.235 17:25:12 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:33.235 17:25:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.235 17:25:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.235 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:14:33.801 17:25:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.802 17:25:12 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:33.802 17:25:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.802 17:25:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.802 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:14:34.060 17:25:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.060 17:25:12 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:34.060 17:25:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.060 17:25:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.060 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:14:34.318 17:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.318 17:25:13 -- 
target/connect_stress.sh@34 -- # kill -0 4048497 00:14:34.318 17:25:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.318 17:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.318 17:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:34.576 17:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.576 17:25:13 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:34.576 17:25:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.576 17:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.576 17:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:34.835 17:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.835 17:25:13 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:34.835 17:25:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.835 17:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.835 17:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:35.401 17:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.401 17:25:14 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:35.401 17:25:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.401 17:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.401 17:25:14 -- common/autotest_common.sh@10 -- # set +x 00:14:35.401 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.660 17:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.660 17:25:14 -- target/connect_stress.sh@34 -- # kill -0 4048497 00:14:35.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4048497) - No such process 00:14:35.660 17:25:14 -- target/connect_stress.sh@38 -- # wait 4048497 00:14:35.660 17:25:14 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:35.660 17:25:14 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
00:14:35.660 17:25:14 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:35.660 17:25:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:35.660 17:25:14 -- nvmf/common.sh@116 -- # sync 00:14:35.660 17:25:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:35.660 17:25:14 -- nvmf/common.sh@119 -- # set +e 00:14:35.660 17:25:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:35.660 17:25:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:35.660 rmmod nvme_tcp 00:14:35.660 rmmod nvme_fabrics 00:14:35.660 rmmod nvme_keyring 00:14:35.660 17:25:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:35.660 17:25:14 -- nvmf/common.sh@123 -- # set -e 00:14:35.660 17:25:14 -- nvmf/common.sh@124 -- # return 0 00:14:35.660 17:25:14 -- nvmf/common.sh@477 -- # '[' -n 4048262 ']' 00:14:35.660 17:25:14 -- nvmf/common.sh@478 -- # killprocess 4048262 00:14:35.660 17:25:14 -- common/autotest_common.sh@926 -- # '[' -z 4048262 ']' 00:14:35.660 17:25:14 -- common/autotest_common.sh@930 -- # kill -0 4048262 00:14:35.660 17:25:14 -- common/autotest_common.sh@931 -- # uname 00:14:35.660 17:25:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:35.660 17:25:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4048262 00:14:35.660 17:25:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:35.660 17:25:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:35.660 17:25:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4048262' 00:14:35.660 killing process with pid 4048262 00:14:35.660 17:25:14 -- common/autotest_common.sh@945 -- # kill 4048262 00:14:35.660 17:25:14 -- common/autotest_common.sh@950 -- # wait 4048262 00:14:35.919 17:25:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:35.919 17:25:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:35.919 17:25:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:35.919 17:25:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.919 17:25:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:35.919 17:25:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.919 17:25:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.919 17:25:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.832 17:25:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:37.832 00:14:37.832 real 0m19.618s 00:14:37.832 user 0m42.555s 00:14:37.832 sys 0m7.937s 00:14:37.832 17:25:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.832 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:37.832 ************************************ 00:14:37.832 END TEST nvmf_connect_stress 00:14:37.832 ************************************ 00:14:38.090 17:25:16 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.090 17:25:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:38.090 17:25:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:38.090 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:38.090 ************************************ 00:14:38.090 START TEST nvmf_fused_ordering 00:14:38.090 ************************************ 00:14:38.090 17:25:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.090 * Looking for test storage... 
00:14:38.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.090 17:25:16 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.090 17:25:16 -- nvmf/common.sh@7 -- # uname -s 00:14:38.090 17:25:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.090 17:25:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.090 17:25:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.090 17:25:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.090 17:25:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.090 17:25:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.090 17:25:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.090 17:25:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.090 17:25:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.090 17:25:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.090 17:25:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:38.090 17:25:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:38.090 17:25:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.090 17:25:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.090 17:25:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.090 17:25:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.090 17:25:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.090 17:25:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.090 17:25:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.090 17:25:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.090 17:25:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.090 17:25:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.091 17:25:16 -- paths/export.sh@5 -- # export PATH 00:14:38.091 17:25:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.091 17:25:16 -- nvmf/common.sh@46 -- # : 0 00:14:38.091 17:25:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:38.091 17:25:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:38.091 17:25:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:38.091 17:25:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.091 17:25:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.091 17:25:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:38.091 17:25:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:38.091 17:25:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:38.091 17:25:16 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:38.091 17:25:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:38.091 17:25:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.091 17:25:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:38.091 17:25:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:38.091 17:25:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:38.091 17:25:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.091 17:25:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.091 17:25:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.091 17:25:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:38.091 17:25:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:38.091 17:25:16 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:38.091 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:44.659 17:25:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:44.659 17:25:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:44.659 17:25:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:44.659 17:25:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:44.659 17:25:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:44.659 17:25:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:44.659 17:25:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:44.659 17:25:22 -- nvmf/common.sh@294 -- # net_devs=() 00:14:44.659 17:25:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:44.659 17:25:22 -- nvmf/common.sh@295 -- # e810=() 00:14:44.659 17:25:22 -- nvmf/common.sh@295 -- # local -ga e810 00:14:44.659 17:25:22 -- nvmf/common.sh@296 -- # x722=() 00:14:44.659 17:25:22 -- nvmf/common.sh@296 -- # local -ga x722 00:14:44.659 17:25:22 -- nvmf/common.sh@297 -- # mlx=() 00:14:44.659 17:25:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:44.659 17:25:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.659 17:25:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:44.659 17:25:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:44.659 17:25:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:44.659 17:25:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:44.659 17:25:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:44.659 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:44.659 17:25:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:44.659 17:25:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:44.659 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:44.659 17:25:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:44.659 17:25:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:44.659 17:25:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.659 17:25:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:44.659 17:25:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.659 17:25:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:44.659 Found net devices under 0000:af:00.0: cvl_0_0 00:14:44.659 17:25:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.659 17:25:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:44.659 17:25:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.659 17:25:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:44.659 17:25:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.659 17:25:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:44.659 Found net devices under 0000:af:00.1: cvl_0_1 00:14:44.659 17:25:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.659 17:25:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:44.659 17:25:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:44.659 17:25:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:44.659 17:25:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.659 17:25:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.659 17:25:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.659 17:25:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:44.659 17:25:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.659 17:25:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.659 17:25:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:44.659 17:25:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:44.659 17:25:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.659 17:25:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:44.659 17:25:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:44.659 17:25:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.659 17:25:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.659 17:25:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.659 17:25:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.659 17:25:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:44.659 17:25:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.659 17:25:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.659 17:25:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.659 17:25:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:44.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:14:44.659 00:14:44.659 --- 10.0.0.2 ping statistics --- 00:14:44.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.659 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:44.659 17:25:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:14:44.659 00:14:44.659 --- 10.0.0.1 ping statistics --- 00:14:44.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.659 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:44.659 17:25:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.659 17:25:22 -- nvmf/common.sh@410 -- # return 0 00:14:44.659 17:25:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:44.659 17:25:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.659 17:25:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:44.659 17:25:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.659 17:25:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:44.659 17:25:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:44.659 17:25:22 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:44.659 17:25:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:44.660 17:25:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:44.660 17:25:22 -- common/autotest_common.sh@10 -- # set +x 00:14:44.660 17:25:22 -- nvmf/common.sh@469 -- # nvmfpid=4053917 00:14:44.660 17:25:22 -- nvmf/common.sh@470 -- # waitforlisten 4053917 00:14:44.660 17:25:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:44.660 17:25:22 -- common/autotest_common.sh@819 -- # '[' -z 4053917 ']' 00:14:44.660 17:25:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.660 17:25:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:44.660 17:25:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:44.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.660 17:25:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:44.660 17:25:22 -- common/autotest_common.sh@10 -- # set +x 00:14:44.660 [2024-07-12 17:25:22.811100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:44.660 [2024-07-12 17:25:22.811153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.660 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.660 [2024-07-12 17:25:22.889470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.660 [2024-07-12 17:25:22.931602] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:44.660 [2024-07-12 17:25:22.931752] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.660 [2024-07-12 17:25:22.931762] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.660 [2024-07-12 17:25:22.931771] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:44.660 [2024-07-12 17:25:22.931799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.917 17:25:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:44.917 17:25:23 -- common/autotest_common.sh@852 -- # return 0 00:14:44.917 17:25:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:44.917 17:25:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:44.917 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.917 17:25:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.917 17:25:23 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.917 17:25:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.917 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.917 [2024-07-12 17:25:23.778602] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.917 17:25:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.917 17:25:23 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:44.917 17:25:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.917 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.917 17:25:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.917 17:25:23 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.917 17:25:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.917 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.918 [2024-07-12 17:25:23.794781] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.918 17:25:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.918 17:25:23 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:44.918 17:25:23 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.918 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.918 NULL1 00:14:44.918 17:25:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.918 17:25:23 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:44.918 17:25:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.918 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.918 17:25:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.918 17:25:23 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:44.918 17:25:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.918 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.918 17:25:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.918 17:25:23 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:44.918 [2024-07-12 17:25:23.850736] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:44.918 [2024-07-12 17:25:23.850803] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054197 ] 00:14:45.175 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.433 Attached to nqn.2016-06.io.spdk:cnode1 00:14:45.433 Namespace ID: 1 size: 1GB 00:14:45.433 fused_ordering(0) 00:14:45.433 fused_ordering(1) 00:14:45.433 fused_ordering(2) 00:14:45.433 fused_ordering(3) 00:14:45.433 fused_ordering(4) 00:14:45.433 fused_ordering(5) 00:14:45.433 fused_ordering(6) 00:14:45.433 fused_ordering(7) 00:14:45.433 fused_ordering(8) 00:14:45.433 fused_ordering(9) 00:14:45.433 fused_ordering(10) 00:14:45.433 fused_ordering(11) 00:14:45.433 fused_ordering(12) 00:14:45.433 fused_ordering(13) 00:14:45.433 fused_ordering(14) 00:14:45.433 fused_ordering(15) 00:14:45.433 fused_ordering(16) 00:14:45.433 fused_ordering(17) 00:14:45.433 fused_ordering(18) 00:14:45.433 fused_ordering(19) 00:14:45.433 fused_ordering(20) 00:14:45.433 fused_ordering(21) 00:14:45.433 fused_ordering(22) 00:14:45.433 fused_ordering(23) 00:14:45.433 fused_ordering(24) 00:14:45.433 fused_ordering(25) 00:14:45.433 fused_ordering(26) 00:14:45.433 fused_ordering(27) 00:14:45.433 fused_ordering(28) 00:14:45.433 fused_ordering(29) 00:14:45.433 fused_ordering(30) 00:14:45.433 fused_ordering(31) 00:14:45.433 fused_ordering(32) 00:14:45.433 fused_ordering(33) 00:14:45.433 fused_ordering(34) 00:14:45.433 fused_ordering(35) 00:14:45.433 fused_ordering(36) 00:14:45.433 fused_ordering(37) 00:14:45.433 fused_ordering(38) 00:14:45.433 fused_ordering(39) 00:14:45.433 fused_ordering(40) 00:14:45.433 fused_ordering(41) 00:14:45.433 fused_ordering(42) 00:14:45.433 fused_ordering(43) 00:14:45.433 fused_ordering(44) 00:14:45.433 fused_ordering(45) 00:14:45.433 fused_ordering(46) 00:14:45.433 fused_ordering(47) 00:14:45.433 fused_ordering(48) 
00:14:45.433 fused_ordering(49) 00:14:45.433 fused_ordering(50) 00:14:45.433 fused_ordering(51) 00:14:45.433 fused_ordering(52) 00:14:45.433 fused_ordering(53) 00:14:45.433 fused_ordering(54) 00:14:45.433 fused_ordering(55) 00:14:45.433 fused_ordering(56) 00:14:45.433 fused_ordering(57) 00:14:45.433 fused_ordering(58) 00:14:45.433 fused_ordering(59) 00:14:45.433 fused_ordering(60) 00:14:45.433 fused_ordering(61) 00:14:45.433 fused_ordering(62) 00:14:45.433 fused_ordering(63) 00:14:45.433 fused_ordering(64) 00:14:45.433 fused_ordering(65) 00:14:45.433 fused_ordering(66) 00:14:45.433 fused_ordering(67) 00:14:45.434 fused_ordering(68) 00:14:45.434 fused_ordering(69) 00:14:45.434 fused_ordering(70) 00:14:45.434 fused_ordering(71) 00:14:45.434 fused_ordering(72) 00:14:45.434 fused_ordering(73) 00:14:45.434 fused_ordering(74) 00:14:45.434 fused_ordering(75) 00:14:45.434 fused_ordering(76) 00:14:45.434 fused_ordering(77) 00:14:45.434 fused_ordering(78) 00:14:45.434 fused_ordering(79) 00:14:45.434 fused_ordering(80) 00:14:45.434 fused_ordering(81) 00:14:45.434 fused_ordering(82) 00:14:45.434 fused_ordering(83) 00:14:45.434 fused_ordering(84) 00:14:45.434 fused_ordering(85) 00:14:45.434 fused_ordering(86) 00:14:45.434 fused_ordering(87) 00:14:45.434 fused_ordering(88) 00:14:45.434 fused_ordering(89) 00:14:45.434 fused_ordering(90) 00:14:45.434 fused_ordering(91) 00:14:45.434 fused_ordering(92) 00:14:45.434 fused_ordering(93) 00:14:45.434 fused_ordering(94) 00:14:45.434 fused_ordering(95) 00:14:45.434 fused_ordering(96) 00:14:45.434 fused_ordering(97) 00:14:45.434 fused_ordering(98) 00:14:45.434 fused_ordering(99) 00:14:45.434 fused_ordering(100) 00:14:45.434 fused_ordering(101) 00:14:45.434 fused_ordering(102) 00:14:45.434 fused_ordering(103) 00:14:45.434 fused_ordering(104) 00:14:45.434 fused_ordering(105) 00:14:45.434 fused_ordering(106) 00:14:45.434 fused_ordering(107) 00:14:45.434 fused_ordering(108) 00:14:45.434 fused_ordering(109) 00:14:45.434 fused_ordering(110) 
00:14:45.434 fused_ordering(111) 00:14:45.434 fused_ordering(112) 00:14:45.434 fused_ordering(113) 00:14:45.434 fused_ordering(114) 00:14:45.434 fused_ordering(115) 00:14:45.434 fused_ordering(116) 00:14:45.434 fused_ordering(117) 00:14:45.434 fused_ordering(118) 00:14:45.434 fused_ordering(119) 00:14:45.434 fused_ordering(120) 00:14:45.434 fused_ordering(121) 00:14:45.434 fused_ordering(122) 00:14:45.434 fused_ordering(123) 00:14:45.434 fused_ordering(124) 00:14:45.434 fused_ordering(125) 00:14:45.434 fused_ordering(126) 00:14:45.434 fused_ordering(127) 00:14:45.434 fused_ordering(128) 00:14:45.434 fused_ordering(129) 00:14:45.434 fused_ordering(130) 00:14:45.434 fused_ordering(131) 00:14:45.434 fused_ordering(132) 00:14:45.434 fused_ordering(133) 00:14:45.434 fused_ordering(134) 00:14:45.434 fused_ordering(135) 00:14:45.434 fused_ordering(136) 00:14:45.434 fused_ordering(137) 00:14:45.434 fused_ordering(138) 00:14:45.434 fused_ordering(139) 00:14:45.434 fused_ordering(140) 00:14:45.434 fused_ordering(141) 00:14:45.434 fused_ordering(142) 00:14:45.434 fused_ordering(143) 00:14:45.434 fused_ordering(144) 00:14:45.434 fused_ordering(145) 00:14:45.434 fused_ordering(146) 00:14:45.434 fused_ordering(147) 00:14:45.434 fused_ordering(148) 00:14:45.434 fused_ordering(149) 00:14:45.434 fused_ordering(150) 00:14:45.434 fused_ordering(151) 00:14:45.434 fused_ordering(152) 00:14:45.434 fused_ordering(153) 00:14:45.434 fused_ordering(154) 00:14:45.434 fused_ordering(155) 00:14:45.434 fused_ordering(156) 00:14:45.434 fused_ordering(157) 00:14:45.434 fused_ordering(158) 00:14:45.434 fused_ordering(159) 00:14:45.434 fused_ordering(160) 00:14:45.434 fused_ordering(161) 00:14:45.434 fused_ordering(162) 00:14:45.434 fused_ordering(163) 00:14:45.434 fused_ordering(164) 00:14:45.434 fused_ordering(165) 00:14:45.434 fused_ordering(166) 00:14:45.434 fused_ordering(167) 00:14:45.434 fused_ordering(168) 00:14:45.434 fused_ordering(169) 00:14:45.434 fused_ordering(170) 00:14:45.434 
fused_ordering(171) 00:14:45.434 fused_ordering(172) 00:14:45.434 fused_ordering(173) 00:14:45.434 fused_ordering(174) 00:14:45.434 fused_ordering(175) 00:14:45.434 fused_ordering(176) 00:14:45.434 fused_ordering(177) 00:14:45.434 fused_ordering(178) 00:14:45.434 fused_ordering(179) 00:14:45.434 fused_ordering(180) 00:14:45.434 fused_ordering(181) 00:14:45.434 fused_ordering(182) 00:14:45.434 fused_ordering(183) 00:14:45.434 fused_ordering(184) 00:14:45.434 fused_ordering(185) 00:14:45.434 fused_ordering(186) 00:14:45.434 fused_ordering(187) 00:14:45.434 fused_ordering(188) 00:14:45.434 fused_ordering(189) 00:14:45.434 fused_ordering(190) 00:14:45.434 fused_ordering(191) 00:14:45.434 fused_ordering(192) 00:14:45.434 fused_ordering(193) 00:14:45.434 fused_ordering(194) 00:14:45.434 fused_ordering(195) 00:14:45.434 fused_ordering(196) 00:14:45.434 fused_ordering(197) 00:14:45.434 fused_ordering(198) 00:14:45.434 fused_ordering(199) 00:14:45.434 fused_ordering(200) 00:14:45.434 fused_ordering(201) 00:14:45.434 fused_ordering(202) 00:14:45.434 fused_ordering(203) 00:14:45.434 fused_ordering(204) 00:14:45.434 fused_ordering(205) 00:14:46.001 fused_ordering(206) 00:14:46.001 fused_ordering(207) 00:14:46.001 fused_ordering(208) 00:14:46.001 fused_ordering(209) 00:14:46.001 fused_ordering(210) 00:14:46.001 fused_ordering(211) 00:14:46.001 fused_ordering(212) 00:14:46.001 fused_ordering(213) 00:14:46.001 fused_ordering(214) 00:14:46.001 fused_ordering(215) 00:14:46.001 fused_ordering(216) 00:14:46.001 fused_ordering(217) 00:14:46.001 fused_ordering(218) 00:14:46.001 fused_ordering(219) 00:14:46.001 fused_ordering(220) 00:14:46.001 fused_ordering(221) 00:14:46.001 fused_ordering(222) 00:14:46.001 fused_ordering(223) 00:14:46.001 fused_ordering(224) 00:14:46.001 fused_ordering(225) 00:14:46.001 fused_ordering(226) 00:14:46.001 fused_ordering(227) 00:14:46.001 fused_ordering(228) 00:14:46.001 fused_ordering(229) 00:14:46.001 fused_ordering(230) 00:14:46.001 fused_ordering(231) 
00:14:46.001 fused_ordering(232) 00:14:46.001 fused_ordering(233) 00:14:46.001 fused_ordering(234) 00:14:46.001 fused_ordering(235) 00:14:46.001 fused_ordering(236) 00:14:46.001 fused_ordering(237) 00:14:46.001 fused_ordering(238) 00:14:46.001 fused_ordering(239) 00:14:46.001 fused_ordering(240) 00:14:46.001 fused_ordering(241) 00:14:46.001 fused_ordering(242) 00:14:46.001 fused_ordering(243) 00:14:46.001 fused_ordering(244) 00:14:46.001 fused_ordering(245) 00:14:46.001 fused_ordering(246) 00:14:46.001 fused_ordering(247) 00:14:46.001 fused_ordering(248) 00:14:46.001 fused_ordering(249) 00:14:46.001 fused_ordering(250) 00:14:46.001 fused_ordering(251) 00:14:46.001 fused_ordering(252) 00:14:46.001 fused_ordering(253) 00:14:46.001 fused_ordering(254) 00:14:46.001 fused_ordering(255) 00:14:46.001 fused_ordering(256) 00:14:46.001 fused_ordering(257) 00:14:46.001 fused_ordering(258) 00:14:46.001 fused_ordering(259) 00:14:46.001 fused_ordering(260) 00:14:46.001 fused_ordering(261) 00:14:46.001 fused_ordering(262) 00:14:46.001 fused_ordering(263) 00:14:46.001 fused_ordering(264) 00:14:46.001 fused_ordering(265) 00:14:46.001 fused_ordering(266) 00:14:46.001 fused_ordering(267) 00:14:46.001 fused_ordering(268) 00:14:46.001 fused_ordering(269) 00:14:46.001 fused_ordering(270) 00:14:46.001 fused_ordering(271) 00:14:46.001 fused_ordering(272) 00:14:46.001 fused_ordering(273) 00:14:46.001 fused_ordering(274) 00:14:46.001 fused_ordering(275) 00:14:46.001 fused_ordering(276) 00:14:46.001 fused_ordering(277) 00:14:46.001 fused_ordering(278) 00:14:46.001 fused_ordering(279) 00:14:46.001 fused_ordering(280) 00:14:46.001 fused_ordering(281) 00:14:46.001 fused_ordering(282) 00:14:46.001 fused_ordering(283) 00:14:46.001 fused_ordering(284) 00:14:46.001 fused_ordering(285) 00:14:46.001 fused_ordering(286) 00:14:46.001 fused_ordering(287) 00:14:46.001 fused_ordering(288) 00:14:46.001 fused_ordering(289) 00:14:46.001 fused_ordering(290) 00:14:46.001 fused_ordering(291) 00:14:46.001 
fused_ordering(292) 00:14:46.001 fused_ordering(293) 00:14:46.001 fused_ordering(294) 00:14:46.001 fused_ordering(295) 00:14:46.001 fused_ordering(296) 00:14:46.001 fused_ordering(297) 00:14:46.001 fused_ordering(298) 00:14:46.001 fused_ordering(299) 00:14:46.001 fused_ordering(300) 00:14:46.001 fused_ordering(301) 00:14:46.001 fused_ordering(302) 00:14:46.001 fused_ordering(303) 00:14:46.001 fused_ordering(304) 00:14:46.001 fused_ordering(305) 00:14:46.001 fused_ordering(306) 00:14:46.001 fused_ordering(307) 00:14:46.001 fused_ordering(308) 00:14:46.001 fused_ordering(309) 00:14:46.001 fused_ordering(310) 00:14:46.001 fused_ordering(311) 00:14:46.001 fused_ordering(312) 00:14:46.001 fused_ordering(313) 00:14:46.001 fused_ordering(314) 00:14:46.001 fused_ordering(315) 00:14:46.001 fused_ordering(316) 00:14:46.001 fused_ordering(317) 00:14:46.001 fused_ordering(318) 00:14:46.001 fused_ordering(319) 00:14:46.001 fused_ordering(320) 00:14:46.001 fused_ordering(321) 00:14:46.001 fused_ordering(322) 00:14:46.001 fused_ordering(323) 00:14:46.001 fused_ordering(324) 00:14:46.001 fused_ordering(325) 00:14:46.001 fused_ordering(326) 00:14:46.001 fused_ordering(327) 00:14:46.001 fused_ordering(328) 00:14:46.001 fused_ordering(329) 00:14:46.001 fused_ordering(330) 00:14:46.001 fused_ordering(331) 00:14:46.001 fused_ordering(332) 00:14:46.001 fused_ordering(333) 00:14:46.001 fused_ordering(334) 00:14:46.001 fused_ordering(335) 00:14:46.001 fused_ordering(336) 00:14:46.001 fused_ordering(337) 00:14:46.001 fused_ordering(338) 00:14:46.001 fused_ordering(339) 00:14:46.001 fused_ordering(340) 00:14:46.001 fused_ordering(341) 00:14:46.001 fused_ordering(342) 00:14:46.001 fused_ordering(343) 00:14:46.001 fused_ordering(344) 00:14:46.001 fused_ordering(345) 00:14:46.001 fused_ordering(346) 00:14:46.001 fused_ordering(347) 00:14:46.001 fused_ordering(348) 00:14:46.001 fused_ordering(349) 00:14:46.001 fused_ordering(350) 00:14:46.001 fused_ordering(351) 00:14:46.001 fused_ordering(352) 
00:14:46.001 fused_ordering(353) 00:14:46.001 fused_ordering(354) 00:14:46.001 fused_ordering(355) 00:14:46.001 fused_ordering(356) 00:14:46.001 fused_ordering(357) 00:14:46.001 fused_ordering(358) 00:14:46.001 fused_ordering(359) 00:14:46.001 fused_ordering(360) 00:14:46.001 fused_ordering(361) 00:14:46.001 fused_ordering(362) 00:14:46.001 fused_ordering(363) 00:14:46.001 fused_ordering(364) 00:14:46.001 fused_ordering(365) 00:14:46.001 fused_ordering(366) 00:14:46.001 fused_ordering(367) 00:14:46.001 fused_ordering(368) 00:14:46.001 fused_ordering(369) 00:14:46.001 fused_ordering(370) 00:14:46.001 fused_ordering(371) 00:14:46.001 fused_ordering(372) 00:14:46.001 fused_ordering(373) 00:14:46.001 fused_ordering(374) 00:14:46.001 fused_ordering(375) 00:14:46.001 fused_ordering(376) 00:14:46.001 fused_ordering(377) 00:14:46.001 fused_ordering(378) 00:14:46.001 fused_ordering(379) 00:14:46.001 fused_ordering(380) 00:14:46.001 fused_ordering(381) 00:14:46.001 fused_ordering(382) 00:14:46.001 fused_ordering(383) 00:14:46.001 fused_ordering(384) 00:14:46.001 fused_ordering(385) 00:14:46.001 fused_ordering(386) 00:14:46.001 fused_ordering(387) 00:14:46.001 fused_ordering(388) 00:14:46.001 fused_ordering(389) 00:14:46.001 fused_ordering(390) 00:14:46.001 fused_ordering(391) 00:14:46.001 fused_ordering(392) 00:14:46.001 fused_ordering(393) 00:14:46.001 fused_ordering(394) 00:14:46.001 fused_ordering(395) 00:14:46.001 fused_ordering(396) 00:14:46.001 fused_ordering(397) 00:14:46.001 fused_ordering(398) 00:14:46.001 fused_ordering(399) 00:14:46.001 fused_ordering(400) 00:14:46.001 fused_ordering(401) 00:14:46.001 fused_ordering(402) 00:14:46.001 fused_ordering(403) 00:14:46.001 fused_ordering(404) 00:14:46.001 fused_ordering(405) 00:14:46.001 fused_ordering(406) 00:14:46.002 fused_ordering(407) 00:14:46.002 fused_ordering(408) 00:14:46.002 fused_ordering(409) 00:14:46.002 fused_ordering(410) 00:14:46.260 fused_ordering(411) 00:14:46.260 fused_ordering(412) 00:14:46.260 
fused_ordering(413) 00:14:46.260 fused_ordering(414) 00:14:46.260 fused_ordering(415) 00:14:46.260 fused_ordering(416) 00:14:46.260 fused_ordering(417) 00:14:46.260 fused_ordering(418) 00:14:46.260 fused_ordering(419) 00:14:46.260 fused_ordering(420) 00:14:46.260 fused_ordering(421) 00:14:46.260 fused_ordering(422) 00:14:46.260 fused_ordering(423) 00:14:46.260 fused_ordering(424) 00:14:46.260 fused_ordering(425) 00:14:46.260 fused_ordering(426) 00:14:46.260 fused_ordering(427) 00:14:46.260 fused_ordering(428) 00:14:46.260 fused_ordering(429) 00:14:46.260 fused_ordering(430) 00:14:46.260 fused_ordering(431) 00:14:46.260 fused_ordering(432) 00:14:46.260 fused_ordering(433) 00:14:46.260 fused_ordering(434) 00:14:46.260 fused_ordering(435) 00:14:46.260 fused_ordering(436) 00:14:46.260 fused_ordering(437) 00:14:46.260 fused_ordering(438) 00:14:46.260 fused_ordering(439) 00:14:46.260 fused_ordering(440) 00:14:46.260 fused_ordering(441) 00:14:46.260 fused_ordering(442) 00:14:46.260 fused_ordering(443) 00:14:46.260 fused_ordering(444) 00:14:46.260 fused_ordering(445) 00:14:46.260 fused_ordering(446) 00:14:46.260 fused_ordering(447) 00:14:46.260 fused_ordering(448) 00:14:46.260 fused_ordering(449) 00:14:46.260 fused_ordering(450) 00:14:46.260 fused_ordering(451) 00:14:46.260 fused_ordering(452) 00:14:46.260 fused_ordering(453) 00:14:46.260 fused_ordering(454) 00:14:46.260 fused_ordering(455) 00:14:46.260 fused_ordering(456) 00:14:46.260 fused_ordering(457) 00:14:46.260 fused_ordering(458) 00:14:46.260 fused_ordering(459) 00:14:46.260 fused_ordering(460) 00:14:46.260 fused_ordering(461) 00:14:46.260 fused_ordering(462) 00:14:46.260 fused_ordering(463) 00:14:46.260 fused_ordering(464) 00:14:46.260 fused_ordering(465) 00:14:46.260 fused_ordering(466) 00:14:46.260 fused_ordering(467) 00:14:46.260 fused_ordering(468) 00:14:46.260 fused_ordering(469) 00:14:46.260 fused_ordering(470) 00:14:46.260 fused_ordering(471) 00:14:46.260 fused_ordering(472) 00:14:46.260 fused_ordering(473) 
00:14:46.260 fused_ordering(474) 00:14:46.260 fused_ordering(475) 00:14:46.260 fused_ordering(476) 00:14:46.260 fused_ordering(477) 00:14:46.260 fused_ordering(478) 00:14:46.260 fused_ordering(479) 00:14:46.260 fused_ordering(480) 00:14:46.260 fused_ordering(481) 00:14:46.260 fused_ordering(482) 00:14:46.260 fused_ordering(483) 00:14:46.260 fused_ordering(484) 00:14:46.260 fused_ordering(485) 00:14:46.260 fused_ordering(486) 00:14:46.260 fused_ordering(487) 00:14:46.260 fused_ordering(488) 00:14:46.260 fused_ordering(489) 00:14:46.260 fused_ordering(490) 00:14:46.260 fused_ordering(491) 00:14:46.260 fused_ordering(492) 00:14:46.260 fused_ordering(493) 00:14:46.260 fused_ordering(494) 00:14:46.260 fused_ordering(495) 00:14:46.260 fused_ordering(496) 00:14:46.260 fused_ordering(497) 00:14:46.260 fused_ordering(498) 00:14:46.260 fused_ordering(499) 00:14:46.260 fused_ordering(500) 00:14:46.260 fused_ordering(501) 00:14:46.260 fused_ordering(502) 00:14:46.260 fused_ordering(503) 00:14:46.260 fused_ordering(504) 00:14:46.260 fused_ordering(505) 00:14:46.260 fused_ordering(506) 00:14:46.260 fused_ordering(507) 00:14:46.260 fused_ordering(508) 00:14:46.260 fused_ordering(509) 00:14:46.260 fused_ordering(510) 00:14:46.260 fused_ordering(511) 00:14:46.260 fused_ordering(512) 00:14:46.260 fused_ordering(513) 00:14:46.260 fused_ordering(514) 00:14:46.260 fused_ordering(515) 00:14:46.260 fused_ordering(516) 00:14:46.260 fused_ordering(517) 00:14:46.260 fused_ordering(518) 00:14:46.260 fused_ordering(519) 00:14:46.260 fused_ordering(520) 00:14:46.260 fused_ordering(521) 00:14:46.260 fused_ordering(522) 00:14:46.260 fused_ordering(523) 00:14:46.260 fused_ordering(524) 00:14:46.260 fused_ordering(525) 00:14:46.260 fused_ordering(526) 00:14:46.260 fused_ordering(527) 00:14:46.260 fused_ordering(528) 00:14:46.260 fused_ordering(529) 00:14:46.260 fused_ordering(530) 00:14:46.260 fused_ordering(531) 00:14:46.260 fused_ordering(532) 00:14:46.260 fused_ordering(533) 00:14:46.260 
fused_ordering(534) 00:14:46.260 [... fused_ordering(535) through fused_ordering(1016) elided: identical per-request trace entries, timestamps advancing from 00:14:46.260 through 00:14:47.766 ...] 00:14:47.766 fused_ordering(1017)
00:14:47.766 fused_ordering(1018) 00:14:47.766 fused_ordering(1019) 00:14:47.766 fused_ordering(1020) 00:14:47.766 fused_ordering(1021) 00:14:47.766 fused_ordering(1022) 00:14:47.766 fused_ordering(1023) 00:14:47.766 17:25:26 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:47.766 17:25:26 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:47.766 17:25:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:47.766 17:25:26 -- nvmf/common.sh@116 -- # sync 00:14:47.766 17:25:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:47.766 17:25:26 -- nvmf/common.sh@119 -- # set +e 00:14:47.766 17:25:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:47.766 17:25:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:47.766 rmmod nvme_tcp 00:14:47.766 rmmod nvme_fabrics 00:14:47.766 rmmod nvme_keyring 00:14:47.766 17:25:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:47.766 17:25:26 -- nvmf/common.sh@123 -- # set -e 00:14:47.766 17:25:26 -- nvmf/common.sh@124 -- # return 0 00:14:47.766 17:25:26 -- nvmf/common.sh@477 -- # '[' -n 4053917 ']' 00:14:47.766 17:25:26 -- nvmf/common.sh@478 -- # killprocess 4053917 00:14:47.766 17:25:26 -- common/autotest_common.sh@926 -- # '[' -z 4053917 ']' 00:14:47.766 17:25:26 -- common/autotest_common.sh@930 -- # kill -0 4053917 00:14:47.766 17:25:26 -- common/autotest_common.sh@931 -- # uname 00:14:47.766 17:25:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:47.766 17:25:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4053917 00:14:47.766 17:25:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:47.766 17:25:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:47.766 17:25:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4053917' 00:14:47.766 killing process with pid 4053917 00:14:47.766 17:25:26 -- common/autotest_common.sh@945 -- # kill 4053917 00:14:47.766 17:25:26 -- common/autotest_common.sh@950 -- 
# wait 4053917 00:14:47.766 17:25:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:47.766 17:25:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:47.766 17:25:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:47.766 17:25:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.766 17:25:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:47.766 17:25:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.766 17:25:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.766 17:25:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.302 17:25:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:50.302 00:14:50.302 real 0m11.978s 00:14:50.302 user 0m7.022s 00:14:50.302 sys 0m6.104s 00:14:50.302 17:25:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.302 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:14:50.302 ************************************ 00:14:50.302 END TEST nvmf_fused_ordering 00:14:50.302 ************************************ 00:14:50.302 17:25:28 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:50.302 17:25:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:50.302 17:25:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:50.302 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:14:50.302 ************************************ 00:14:50.302 START TEST nvmf_delete_subsystem 00:14:50.302 ************************************ 00:14:50.302 17:25:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:50.302 * Looking for test storage... 
00:14:50.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.302 17:25:28 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.302 17:25:28 -- nvmf/common.sh@7 -- # uname -s 00:14:50.302 17:25:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.302 17:25:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.302 17:25:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.302 17:25:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.302 17:25:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.302 17:25:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.302 17:25:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.302 17:25:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.302 17:25:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.302 17:25:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.302 17:25:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:50.302 17:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:50.302 17:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.302 17:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.302 17:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.302 17:25:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.302 17:25:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.302 17:25:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.302 17:25:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.302 17:25:28 -- paths/export.sh@2 -- # 
PATH=[paths/export.sh@2 prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the system paths; heavily duplicated value elided] 00:14:50.302 17:25:28 -- paths/export.sh@3 -- # PATH=[duplicated value elided] 00:14:50.302 17:25:28 -- paths/export.sh@4 -- # PATH=[duplicated value elided] 00:14:50.302 17:25:28 -- paths/export.sh@5 -- # export PATH 00:14:50.302 17:25:28 -- paths/export.sh@6 -- # echo [assembled PATH elided] 00:14:50.302 17:25:28 -- nvmf/common.sh@46 -- # : 0 00:14:50.302 17:25:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:50.302 17:25:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:50.302 17:25:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:50.302 17:25:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.302 17:25:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.302 17:25:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:50.302 17:25:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:50.302 17:25:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:50.302 17:25:28 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:50.302 17:25:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:50.302 17:25:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.302 17:25:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:50.302 17:25:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:50.302 17:25:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:50.302 17:25:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.302 17:25:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.302 17:25:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.302 17:25:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:50.302 17:25:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:50.302 17:25:28
-- nvmf/common.sh@284 -- # xtrace_disable 00:14:50.302 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:14:55.578 17:25:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:55.578 17:25:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:55.578 17:25:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:55.578 17:25:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:55.578 17:25:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:55.578 17:25:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:55.578 17:25:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:55.578 17:25:34 -- nvmf/common.sh@294 -- # net_devs=() 00:14:55.578 17:25:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:55.578 17:25:34 -- nvmf/common.sh@295 -- # e810=() 00:14:55.578 17:25:34 -- nvmf/common.sh@295 -- # local -ga e810 00:14:55.578 17:25:34 -- nvmf/common.sh@296 -- # x722=() 00:14:55.578 17:25:34 -- nvmf/common.sh@296 -- # local -ga x722 00:14:55.578 17:25:34 -- nvmf/common.sh@297 -- # mlx=() 00:14:55.578 17:25:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:55.578 17:25:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.578 17:25:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:55.578 17:25:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:55.578 17:25:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:55.578 17:25:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:55.578 17:25:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:55.578 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:55.578 17:25:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:55.578 17:25:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:55.578 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:55.578 17:25:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:55.578 17:25:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:55.578 17:25:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.578 17:25:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:55.578 17:25:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.578 17:25:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:55.578 Found net devices under 0000:af:00.0: cvl_0_0 00:14:55.578 17:25:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.578 17:25:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:55.578 17:25:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.578 17:25:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:55.578 17:25:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.578 17:25:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:55.578 Found net devices under 0000:af:00.1: cvl_0_1 00:14:55.578 17:25:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.578 17:25:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:55.578 17:25:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:55.578 17:25:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:55.578 17:25:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:55.578 17:25:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.578 17:25:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.578 17:25:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.578 17:25:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:55.578 17:25:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.578 17:25:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.578 17:25:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:55.578 17:25:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:55.578 17:25:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.578 17:25:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:55.578 17:25:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:55.578 17:25:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.578 17:25:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.578 17:25:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.578 17:25:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.578 17:25:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:55.578 17:25:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:55.838 17:25:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.838 17:25:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.838 17:25:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:55.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:14:55.838 00:14:55.838 --- 10.0.0.2 ping statistics --- 00:14:55.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.838 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:55.838 17:25:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:55.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:14:55.838 00:14:55.838 --- 10.0.0.1 ping statistics --- 00:14:55.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.838 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:55.838 17:25:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.838 17:25:34 -- nvmf/common.sh@410 -- # return 0 00:14:55.838 17:25:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:55.838 17:25:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.838 17:25:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:55.838 17:25:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:55.838 17:25:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.838 17:25:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:55.838 17:25:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:55.838 17:25:34 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:55.838 17:25:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:55.838 17:25:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:55.838 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:14:55.838 17:25:34 -- nvmf/common.sh@469 -- # nvmfpid=4058345 00:14:55.838 17:25:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:55.838 17:25:34 -- nvmf/common.sh@470 -- # waitforlisten 4058345 00:14:55.838 17:25:34 -- common/autotest_common.sh@819 -- # '[' -z 4058345 ']' 00:14:55.838 17:25:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.838 17:25:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.838 17:25:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:55.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.838 17:25:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.838 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:14:55.838 [2024-07-12 17:25:34.721597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:55.838 [2024-07-12 17:25:34.721651] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.838 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.097 [2024-07-12 17:25:34.809916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:56.097 [2024-07-12 17:25:34.852375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.097 [2024-07-12 17:25:34.852523] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.097 [2024-07-12 17:25:34.852537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.097 [2024-07-12 17:25:34.852546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
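Two bash idioms from nvmf/common.sh carry the setup above: the target command is kept as an array, and the `ip netns exec` wrapper array is prepended to it so the target runs inside the test namespace. A minimal standalone sketch (the binary path here is a placeholder, not the rig's real path):

```shell
# Namespace wrapper, as built at nvmf/common.sh@242.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# The app command as an array; the path is illustrative for this sketch.
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3)

# nvmf/common.sh@269: prepend the wrapper. The quoted "${arr[@]}" expansion
# keeps every argument a separate word, even arguments containing spaces.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[*]}"
# → ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
```

Because the wrapper is an array rather than a string, no re-quoting or `eval` is needed when the combined command is finally executed.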
00:14:56.097 [2024-07-12 17:25:34.852591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.097 [2024-07-12 17:25:34.852597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.035 17:25:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.035 17:25:35 -- common/autotest_common.sh@852 -- # return 0 00:14:57.035 17:25:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:57.035 17:25:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:57.035 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 17:25:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.035 17:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.035 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 [2024-07-12 17:25:35.689068] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.035 17:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:57.035 17:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.035 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 17:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.035 17:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.035 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 [2024-07-12 17:25:35.705251] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.035 17:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:14:57.035 17:25:35 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:57.035 17:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.035 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 NULL1 00:14:57.035 17:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:57.035 17:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.035 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 Delay0 00:14:57.035 17:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.035 17:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.035 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 17:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@28 -- # perf_pid=4058494 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:57.035 17:25:35 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:57.035 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.035 [2024-07-12 17:25:35.779844] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
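The RPC sequence the test drives above can be summarized as the sketch below. `rpc` here is a dry-run stub standing in for SPDK's `scripts/rpc.py` (which on the rig talks to /var/tmp/spdk.sock); the subcommands and arguments mirror the log.

```shell
# Dry-run stub so the sketch runs without a live SPDK target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                               # subsystem, max 10 namespaces
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                   # listen on the netns address
rpc bdev_null_create NULL1 1000 512                              # 1000 MiB bdev, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # ~1 s artificial latency (us)
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # expose Delay0 as a namespace
```

The Delay0 bdev is what makes the later `nvmf_delete_subsystem` interesting: with ~1 s of injected latency, plenty of I/O is still in flight when the subsystem is torn down.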
00:14:58.941 17:25:37 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:58.941 17:25:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:58.941 17:25:37 -- common/autotest_common.sh@10 -- # set +x
00:14:58.941 Read completed with error (sct=0, sc=8)
00:14:58.941 Read completed with error (sct=0, sc=8)
00:14:58.941 Read completed with error (sct=0, sc=8)
00:14:58.941 starting I/O failed: -6
00:14:58.941 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completion records for the remaining in-flight I/O elided]
00:14:58.941 [2024-07-12 17:25:37.903359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee4bf0 is same with the state(5) to be set
00:14:58.941 [2024-07-12 17:25:37.904120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0c800bf20 is same with the state(5) to be set
00:15:00.321 [2024-07-12 17:25:38.876699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee1670 is same with the state(5) to be set
00:15:00.321 [2024-07-12 17:25:38.905303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee42c0 is same with the state(5) to be set
00:15:00.321 [2024-07-12 17:25:38.905443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee4ea0 is same with the state(5) to be set
00:15:00.321 [2024-07-12 17:25:38.906059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0c800c1d0 is same with the state(5) to be set
00:15:00.321 [2024-07-12 17:25:38.906946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee4140 is same with the state(5) to be set
00:15:00.322 [2024-07-12 17:25:38.907449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush
tqpair=0x1ee1670 (9): Bad file descriptor
00:15:00.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:00.322 17:25:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:00.322 17:25:38 -- target/delete_subsystem.sh@34 -- # delay=0
00:15:00.322 17:25:38 -- target/delete_subsystem.sh@35 -- # kill -0 4058494
00:15:00.322 17:25:38 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:00.322 Initializing NVMe Controllers
00:15:00.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:00.322 Controller IO queue size 128, less than required.
00:15:00.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:00.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:00.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:00.322 Initialization complete. Launching workers.
00:15:00.322 ========================================================
00:15:00.322                                                            Latency(us)
00:15:00.322 Device Information                                                       :     IOPS    MiB/s    Average       min        max
00:15:00.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   195.38     0.10  944460.85   1417.52 1014878.88
00:15:00.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   157.69     0.08  868562.30    333.51 1016271.91
00:15:00.322 ========================================================
00:15:00.322 Total                                                                    :   353.06     0.17  910562.34    333.51 1016271.91
00:15:00.322
00:15:00.581 17:25:39 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:15:00.581 17:25:39 -- target/delete_subsystem.sh@35 -- # kill -0 4058494
00:15:00.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4058494) - No such process
00:15:00.581 17:25:39 -- target/delete_subsystem.sh@45 -- # NOT wait 4058494
00:15:00.581 17:25:39 -- common/autotest_common.sh@640 -- # local es=0
00:15:00.581 17:25:39 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 4058494
00:15:00.581 17:25:39 -- common/autotest_common.sh@628 -- # local arg=wait
00:15:00.581 17:25:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:15:00.581 17:25:39 -- common/autotest_common.sh@632 -- # type -t wait
00:15:00.581 17:25:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:15:00.581 17:25:39 -- common/autotest_common.sh@643 -- # wait 4058494
00:15:00.581 17:25:39 -- common/autotest_common.sh@643 -- # es=1
00:15:00.581 17:25:39 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:15:00.581 17:25:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:15:00.581 17:25:39 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:15:00.581 17:25:39 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:00.581 17:25:39 -- common/autotest_common.sh@551 -- # xtrace_disable
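As a sanity check on the perf summary, the Total average is the IOPS-weighted mean of the two per-core averages. A quick awk sketch of the arithmetic (the inputs are the rounded values printed in the table, so the result agrees with the reported 910562.34 only to within rounding):

```shell
# Weighted mean of the per-core average latencies from the table above:
# (195.38*944460.85 + 157.69*868562.30) / (195.38 + 157.69)
awk 'BEGIN {
  iops2 = 195.38; avg2 = 944460.85   # core 2 row
  iops3 = 157.69; avg3 = 868562.30   # core 3 row
  total = (iops2 * avg2 + iops3 * avg3) / (iops2 + iops3)
  printf "%.0f\n", total             # prints 910563
}'
```

The ~910 ms average is roughly the Delay0 bdev's 1 s injected latency minus the time the queued I/O spent before being failed out.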
00:15:00.581 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:15:00.581 17:25:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.581 17:25:39 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.581 17:25:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.581 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:15:00.581 [2024-07-12 17:25:39.435503] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.581 17:25:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.581 17:25:39 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.581 17:25:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.581 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:15:00.581 17:25:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.581 17:25:39 -- target/delete_subsystem.sh@54 -- # perf_pid=4059282 00:15:00.581 17:25:39 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:00.581 17:25:39 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:00.581 17:25:39 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:00.581 17:25:39 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.581 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.581 [2024-07-12 17:25:39.493883] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
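The half-second polling that follows is the test's wait loop: signal 0 is an existence check, so `kill -0` probes the perf process without disturbing it. A self-contained sketch of the same pattern (the function name is mine, not from the suite):

```shell
# Poll a pid with `kill -0` every 0.5 s, giving up after ~20 tries,
# mirroring the delay++/kill -0/sleep 0.5 loop in delete_subsystem.sh.
wait_for_exit() {
  local pid=$1 delay=0
  while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
      return 1                       # still alive after ~10 s
    fi
    sleep 0.5
  done
  return 0                           # process has exited
}
```

Usage: `sleep 1 & wait_for_exit $!` returns 0 once the child has gone; the real test follows the loop with `wait $pid`, which fails with "No such process" once bash has reaped the child, and that failure is asserted with the `NOT` helper.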
00:15:01.150 17:25:39 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.150 17:25:39 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:01.150 17:25:39 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.719 17:25:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.719 17:25:40 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:01.719 17:25:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.287 17:25:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.287 17:25:40 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:02.287 17:25:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.546 17:25:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.546 17:25:41 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:02.546 17:25:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.116 17:25:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.116 17:25:41 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:03.116 17:25:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.683 17:25:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.683 17:25:42 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:03.683 17:25:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.942 Initializing NVMe Controllers 00:15:03.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.942 Controller IO queue size 128, less than required. 00:15:03.942 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:03.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:03.942 Initialization complete. Launching workers. 
00:15:03.942 ======================================================== 00:15:03.942 Latency(us) 00:15:03.942 Device Information : IOPS MiB/s Average min max 00:15:03.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003682.77 1000188.93 1013317.42 00:15:03.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006107.77 1000273.04 1014731.94 00:15:03.942 ======================================================== 00:15:03.942 Total : 256.00 0.12 1004895.27 1000188.93 1014731.94 00:15:03.942 00:15:04.201 17:25:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.201 17:25:42 -- target/delete_subsystem.sh@57 -- # kill -0 4059282 00:15:04.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4059282) - No such process 00:15:04.201 17:25:42 -- target/delete_subsystem.sh@67 -- # wait 4059282 00:15:04.201 17:25:42 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:04.201 17:25:42 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:04.201 17:25:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:04.201 17:25:42 -- nvmf/common.sh@116 -- # sync 00:15:04.201 17:25:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:04.201 17:25:42 -- nvmf/common.sh@119 -- # set +e 00:15:04.201 17:25:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:04.201 17:25:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:04.201 rmmod nvme_tcp 00:15:04.201 rmmod nvme_fabrics 00:15:04.201 rmmod nvme_keyring 00:15:04.201 17:25:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:04.201 17:25:43 -- nvmf/common.sh@123 -- # set -e 00:15:04.201 17:25:43 -- nvmf/common.sh@124 -- # return 0 00:15:04.201 17:25:43 -- nvmf/common.sh@477 -- # '[' -n 4058345 ']' 00:15:04.201 17:25:43 -- nvmf/common.sh@478 -- # killprocess 4058345 00:15:04.201 17:25:43 -- common/autotest_common.sh@926 -- # '[' -z 4058345 ']' 00:15:04.201 17:25:43 
-- common/autotest_common.sh@930 -- # kill -0 4058345 00:15:04.201 17:25:43 -- common/autotest_common.sh@931 -- # uname 00:15:04.201 17:25:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:04.201 17:25:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4058345 00:15:04.201 17:25:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:04.201 17:25:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:04.202 17:25:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4058345' 00:15:04.202 killing process with pid 4058345 00:15:04.202 17:25:43 -- common/autotest_common.sh@945 -- # kill 4058345 00:15:04.202 17:25:43 -- common/autotest_common.sh@950 -- # wait 4058345 00:15:04.461 17:25:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:04.461 17:25:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:04.461 17:25:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:04.461 17:25:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.461 17:25:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:04.461 17:25:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.461 17:25:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.461 17:25:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.389 17:25:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:06.687 00:15:06.687 real 0m16.515s 00:15:06.687 user 0m30.678s 00:15:06.687 sys 0m5.187s 00:15:06.687 17:25:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.687 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:06.687 ************************************ 00:15:06.687 END TEST nvmf_delete_subsystem 00:15:06.687 ************************************ 00:15:06.687 17:25:45 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:06.687 17:25:45 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:06.687 17:25:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:06.687 17:25:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:06.687 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:06.687 ************************************ 00:15:06.687 START TEST nvmf_nvme_cli 00:15:06.687 ************************************ 00:15:06.687 17:25:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:06.687 * Looking for test storage... 00:15:06.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.687 17:25:45 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.687 17:25:45 -- nvmf/common.sh@7 -- # uname -s 00:15:06.687 17:25:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.687 17:25:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.687 17:25:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.687 17:25:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.687 17:25:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.687 17:25:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.687 17:25:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.687 17:25:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.687 17:25:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.687 17:25:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.688 17:25:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:06.688 17:25:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:06.688 17:25:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.688 
17:25:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.688 17:25:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.688 17:25:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.688 17:25:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.688 17:25:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.688 17:25:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.688 17:25:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.688 17:25:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.688 17:25:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.688 17:25:45 -- paths/export.sh@5 -- # export PATH 00:15:06.688 17:25:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.688 17:25:45 -- nvmf/common.sh@46 -- # : 0 00:15:06.688 17:25:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:06.688 17:25:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:06.688 17:25:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:06.688 17:25:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.688 17:25:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.688 17:25:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:06.688 17:25:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:06.688 17:25:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:06.688 17:25:45 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.688 17:25:45 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.688 17:25:45 -- target/nvme_cli.sh@14 
-- # devs=() 00:15:06.688 17:25:45 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:06.688 17:25:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:06.688 17:25:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.688 17:25:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:06.688 17:25:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:06.688 17:25:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:06.688 17:25:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.688 17:25:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.688 17:25:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.688 17:25:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:06.688 17:25:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:06.688 17:25:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:06.688 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.962 17:25:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.962 17:25:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:11.962 17:25:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:11.962 17:25:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:11.962 17:25:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:11.962 17:25:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:11.962 17:25:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:11.962 17:25:50 -- nvmf/common.sh@294 -- # net_devs=() 00:15:11.962 17:25:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:11.962 17:25:50 -- nvmf/common.sh@295 -- # e810=() 00:15:11.962 17:25:50 -- nvmf/common.sh@295 -- # local -ga e810 00:15:11.962 17:25:50 -- nvmf/common.sh@296 -- # x722=() 00:15:11.962 17:25:50 -- nvmf/common.sh@296 -- # local -ga x722 00:15:11.962 17:25:50 -- nvmf/common.sh@297 -- # mlx=() 00:15:11.962 17:25:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:11.962 17:25:50 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.962 17:25:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:11.962 17:25:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:11.962 17:25:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:11.962 17:25:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:11.962 17:25:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:11.962 17:25:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:11.962 17:25:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.962 17:25:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:11.962 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:11.962 17:25:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.962 17:25:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.962 17:25:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.963 17:25:50 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.963 17:25:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:11.963 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:11.963 17:25:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:11.963 17:25:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.963 17:25:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.963 17:25:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.963 17:25:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.963 17:25:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:11.963 Found net devices under 0000:af:00.0: cvl_0_0 00:15:11.963 17:25:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.963 17:25:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.963 17:25:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.963 17:25:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.963 17:25:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.963 17:25:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:11.963 Found net devices under 0000:af:00.1: cvl_0_1 00:15:11.963 17:25:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.963 17:25:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:11.963 17:25:50 -- nvmf/common.sh@402 
-- # is_hw=yes 00:15:11.963 17:25:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:11.963 17:25:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.963 17:25:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.963 17:25:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.963 17:25:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:11.963 17:25:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.963 17:25:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.963 17:25:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:11.963 17:25:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.963 17:25:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.963 17:25:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:11.963 17:25:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:11.963 17:25:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.963 17:25:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.963 17:25:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.963 17:25:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.963 17:25:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:11.963 17:25:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.963 17:25:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.963 17:25:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.963 17:25:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:11.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:11.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:15:11.963 00:15:11.963 --- 10.0.0.2 ping statistics --- 00:15:11.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.963 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:11.963 17:25:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:15:11.963 00:15:11.963 --- 10.0.0.1 ping statistics --- 00:15:11.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.963 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:15:11.963 17:25:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.963 17:25:50 -- nvmf/common.sh@410 -- # return 0 00:15:11.963 17:25:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.963 17:25:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.963 17:25:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:11.963 17:25:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.963 17:25:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:11.963 17:25:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:11.963 17:25:50 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:11.963 17:25:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.963 17:25:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:11.963 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:11.963 17:25:50 -- nvmf/common.sh@469 -- # nvmfpid=4063371 00:15:11.963 17:25:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.963 17:25:50 -- nvmf/common.sh@470 -- # waitforlisten 4063371 00:15:11.963 17:25:50 -- common/autotest_common.sh@819 
-- # '[' -z 4063371 ']' 00:15:11.963 17:25:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.963 17:25:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:11.963 17:25:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.963 17:25:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:11.963 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:11.963 [2024-07-12 17:25:50.663272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:11.963 [2024-07-12 17:25:50.663309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.963 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.963 [2024-07-12 17:25:50.735942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.963 [2024-07-12 17:25:50.780446] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.963 [2024-07-12 17:25:50.780600] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.963 [2024-07-12 17:25:50.780611] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.963 [2024-07-12 17:25:50.780620] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:11.963 [2024-07-12 17:25:50.780667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.963 [2024-07-12 17:25:50.780757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.963 [2024-07-12 17:25:50.780847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.963 [2024-07-12 17:25:50.780849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.223 17:25:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.223 17:25:51 -- common/autotest_common.sh@852 -- # return 0 00:15:12.223 17:25:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.223 17:25:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:12.223 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.223 17:25:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.223 17:25:51 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.223 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.223 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.223 [2024-07-12 17:25:51.167536] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.223 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.223 17:25:51 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:12.223 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.223 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.483 Malloc0 00:15:12.483 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.483 17:25:51 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:12.483 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.483 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.483 Malloc1 00:15:12.483 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:15:12.483 17:25:51 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:12.483 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.483 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.483 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.483 17:25:51 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.483 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.483 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.483 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.483 17:25:51 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.483 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.483 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.483 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.483 17:25:51 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.483 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.483 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.483 [2024-07-12 17:25:51.249998] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.483 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.483 17:25:51 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.483 17:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.483 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.483 17:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.483 17:25:51 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:12.483 00:15:12.483 Discovery Log Number of Records 2, Generation counter 2 00:15:12.483 =====Discovery Log Entry 0====== 00:15:12.483 trtype: tcp 00:15:12.483 adrfam: ipv4 00:15:12.483 subtype: current discovery subsystem 00:15:12.483 treq: not required 00:15:12.483 portid: 0 00:15:12.483 trsvcid: 4420 00:15:12.483 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:12.483 traddr: 10.0.0.2 00:15:12.483 eflags: explicit discovery connections, duplicate discovery information 00:15:12.483 sectype: none 00:15:12.483 =====Discovery Log Entry 1====== 00:15:12.483 trtype: tcp 00:15:12.483 adrfam: ipv4 00:15:12.483 subtype: nvme subsystem 00:15:12.483 treq: not required 00:15:12.483 portid: 0 00:15:12.483 trsvcid: 4420 00:15:12.483 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:12.483 traddr: 10.0.0.2 00:15:12.483 eflags: none 00:15:12.483 sectype: none 00:15:12.483 17:25:51 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:12.483 17:25:51 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:12.483 17:25:51 -- nvmf/common.sh@510 -- # local dev _ 00:15:12.483 17:25:51 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.483 17:25:51 -- nvmf/common.sh@509 -- # nvme list 00:15:12.483 17:25:51 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:12.483 17:25:51 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.483 17:25:51 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.483 17:25:51 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.483 17:25:51 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:12.483 17:25:51 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.862 17:25:52 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:13.862 17:25:52 -- common/autotest_common.sh@1177 -- # local i=0 00:15:13.863 17:25:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.863 17:25:52 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:13.863 17:25:52 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:13.863 17:25:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:15.761 17:25:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:15.761 17:25:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:15.761 17:25:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.761 17:25:54 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:15.761 17:25:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.761 17:25:54 -- common/autotest_common.sh@1187 -- # return 0 00:15:15.761 17:25:54 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:15.761 17:25:54 -- nvmf/common.sh@510 -- # local dev _ 00:15:15.761 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:15.761 17:25:54 -- nvmf/common.sh@509 -- # nvme list 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:15:16.047 /dev/nvme0n1 ]] 00:15:16.047 17:25:54 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:16.047 17:25:54 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:16.047 17:25:54 -- nvmf/common.sh@510 -- # local dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@509 -- # nvme list 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:16.047 17:25:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:16.047 17:25:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:16.047 17:25:54 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:16.047 17:25:54 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.305 17:25:55 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.305 17:25:55 -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.305 17:25:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:16.305 17:25:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.305 17:25:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:16.305 17:25:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.305 17:25:55 -- common/autotest_common.sh@1210 -- # return 0 00:15:16.305 17:25:55 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:15:16.305 17:25:55 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.305 17:25:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.305 17:25:55 -- common/autotest_common.sh@10 -- # set +x 00:15:16.305 17:25:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.305 17:25:55 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:16.305 17:25:55 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:16.305 17:25:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:16.305 17:25:55 -- nvmf/common.sh@116 -- # sync 00:15:16.305 17:25:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:16.305 17:25:55 -- nvmf/common.sh@119 -- # set +e 00:15:16.305 17:25:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:16.305 17:25:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:16.305 rmmod nvme_tcp 00:15:16.305 rmmod nvme_fabrics 00:15:16.305 rmmod nvme_keyring 00:15:16.305 17:25:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:16.305 17:25:55 -- nvmf/common.sh@123 -- # set -e 00:15:16.305 17:25:55 -- nvmf/common.sh@124 -- # return 0 00:15:16.305 17:25:55 -- nvmf/common.sh@477 -- # '[' -n 4063371 ']' 00:15:16.305 17:25:55 -- nvmf/common.sh@478 -- # killprocess 4063371 00:15:16.305 17:25:55 -- common/autotest_common.sh@926 -- # '[' -z 4063371 ']' 00:15:16.305 17:25:55 -- common/autotest_common.sh@930 -- # kill -0 4063371 00:15:16.305 17:25:55 -- common/autotest_common.sh@931 -- # uname 00:15:16.305 17:25:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:16.305 17:25:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4063371 00:15:16.305 17:25:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:16.305 17:25:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:16.305 17:25:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4063371' 00:15:16.305 killing process with pid 4063371 00:15:16.305 17:25:55 -- 
common/autotest_common.sh@945 -- # kill 4063371 00:15:16.305 17:25:55 -- common/autotest_common.sh@950 -- # wait 4063371 00:15:16.564 17:25:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:16.564 17:25:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:16.564 17:25:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:16.564 17:25:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.564 17:25:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:16.564 17:25:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.564 17:25:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.564 17:25:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.095 17:25:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:19.095 00:15:19.095 real 0m12.102s 00:15:19.095 user 0m19.924s 00:15:19.095 sys 0m4.549s 00:15:19.095 17:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.095 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:15:19.095 ************************************ 00:15:19.095 END TEST nvmf_nvme_cli 00:15:19.095 ************************************ 00:15:19.095 17:25:57 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:19.095 17:25:57 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:19.095 17:25:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:19.095 17:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:19.095 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:15:19.095 ************************************ 00:15:19.095 START TEST nvmf_vfio_user 00:15:19.095 ************************************ 00:15:19.095 17:25:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:19.095 * Looking for test storage... 
00:15:19.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.095 17:25:57 -- nvmf/common.sh@7 -- # uname -s 00:15:19.095 17:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.095 17:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.095 17:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.095 17:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.095 17:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.095 17:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.095 17:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.095 17:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.095 17:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.095 17:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.095 17:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:19.095 17:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:19.095 17:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.095 17:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.095 17:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.095 17:25:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.095 17:25:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.095 17:25:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.095 17:25:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.095 17:25:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.095 17:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.095 17:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.095 17:25:57 -- paths/export.sh@5 -- # export PATH 00:15:19.095 17:25:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.095 17:25:57 -- nvmf/common.sh@46 -- # : 0 00:15:19.095 17:25:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:19.095 17:25:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:19.095 17:25:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:19.095 17:25:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.095 17:25:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.095 17:25:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:19.095 17:25:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:19.095 17:25:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4064768 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4064768' 00:15:19.095 Process pid: 4064768 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4064768 00:15:19.095 17:25:57 -- common/autotest_common.sh@819 -- # '[' -z 4064768 ']' 00:15:19.095 17:25:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.095 17:25:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:19.095 17:25:57 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:19.095 17:25:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.095 17:25:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:19.095 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:15:19.095 [2024-07-12 17:25:57.733129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:19.095 [2024-07-12 17:25:57.733249] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.095 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.095 [2024-07-12 17:25:57.853124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.095 [2024-07-12 17:25:57.897498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:19.095 [2024-07-12 17:25:57.897649] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.095 [2024-07-12 17:25:57.897661] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.095 [2024-07-12 17:25:57.897670] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.095 [2024-07-12 17:25:57.897715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.095 [2024-07-12 17:25:57.897734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.095 [2024-07-12 17:25:57.897767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.095 [2024-07-12 17:25:57.897769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.663 17:25:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:19.663 17:25:58 -- common/autotest_common.sh@852 -- # return 0 00:15:19.663 17:25:58 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:20.598 17:25:59 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:20.857 17:25:59 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:20.857 17:25:59 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:20.857 17:25:59 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:20.857 17:25:59 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:20.857 17:25:59 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:21.117 Malloc1 00:15:21.117 17:25:59 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:21.376 17:26:00 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:21.634 17:26:00 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:21.893 17:26:00 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.893 17:26:00 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:21.893 17:26:00 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:22.152 Malloc2 00:15:22.152 17:26:01 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:22.410 17:26:01 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:22.669 17:26:01 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:22.929 17:26:01 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:22.929 17:26:01 -- 
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:22.929 17:26:01 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.929 17:26:01 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:22.929 17:26:01 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:22.929 17:26:01 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:22.929 [2024-07-12 17:26:01.718800] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:22.929 [2024-07-12 17:26:01.718846] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065583 ] 00:15:22.929 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.929 [2024-07-12 17:26:01.754739] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:22.929 [2024-07-12 17:26:01.764660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.929 [2024-07-12 17:26:01.764684] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9c485ac000 00:15:22.929 [2024-07-12 17:26:01.765655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.766652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.767660] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.768671] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.769675] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.770684] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.771684] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.772690] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.929 [2024-07-12 17:26:01.773700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.929 [2024-07-12 17:26:01.773713] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9c47372000 00:15:22.929 [2024-07-12 17:26:01.775121] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.929 [2024-07-12 17:26:01.794931] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:22.929 [2024-07-12 17:26:01.794962] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:22.929 [2024-07-12 17:26:01.799894] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:22.929 
[2024-07-12 17:26:01.799943] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:22.929 [2024-07-12 17:26:01.800048] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:22.929 [2024-07-12 17:26:01.800073] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:22.929 [2024-07-12 17:26:01.800081] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:22.929 [2024-07-12 17:26:01.800881] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:22.929 [2024-07-12 17:26:01.800893] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:22.929 [2024-07-12 17:26:01.800905] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:22.929 [2024-07-12 17:26:01.801889] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:22.929 [2024-07-12 17:26:01.801901] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:22.929 [2024-07-12 17:26:01.801910] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:22.929 [2024-07-12 17:26:01.802888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:22.929 [2024-07-12 17:26:01.802899] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:22.929 [2024-07-12 17:26:01.803895] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:22.929 [2024-07-12 17:26:01.803906] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:22.929 [2024-07-12 17:26:01.803913] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:22.929 [2024-07-12 17:26:01.803921] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:22.929 [2024-07-12 17:26:01.804028] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:22.929 [2024-07-12 17:26:01.804035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:22.929 [2024-07-12 17:26:01.804041] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:22.929 [2024-07-12 17:26:01.804904] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:22.929 [2024-07-12 17:26:01.805906] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:22.929 [2024-07-12 17:26:01.806911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:22.929 [2024-07-12 17:26:01.807941] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:22.929 [2024-07-12 17:26:01.808922] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:22.929 [2024-07-12 17:26:01.808933] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:22.929 [2024-07-12 17:26:01.808939] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:22.929 [2024-07-12 17:26:01.808964] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:22.929 [2024-07-12 17:26:01.808974] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:22.929 [2024-07-12 17:26:01.808990] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.929 [2024-07-12 17:26:01.808997] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.929 [2024-07-12 17:26:01.809013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.929 [2024-07-12 17:26:01.809068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:22.929 [2024-07-12 17:26:01.809080] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:22.929 [2024-07-12 17:26:01.809086] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:22.930 [2024-07-12 17:26:01.809092] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:22.930 [2024-07-12 17:26:01.809097] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:22.930 [2024-07-12 17:26:01.809103] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:22.930 [2024-07-12 17:26:01.809109] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:22.930 [2024-07-12 17:26:01.809115] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809127] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.930 [2024-07-12 17:26:01.809180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.930 [2024-07-12 17:26:01.809191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.930 [2024-07-12 
17:26:01.809201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.930 [2024-07-12 17:26:01.809207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809218] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809258] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:22.930 [2024-07-12 17:26:01.809265] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809273] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809283] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 
dnr:0 00:15:22.930 [2024-07-12 17:26:01.809383] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809393] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809403] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:22.930 [2024-07-12 17:26:01.809408] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:22.930 [2024-07-12 17:26:01.809416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809447] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:22.930 [2024-07-12 17:26:01.809462] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809472] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809480] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.930 [2024-07-12 17:26:01.809486] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.930 [2024-07-12 17:26:01.809494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809534] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809544] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809553] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.930 [2024-07-12 17:26:01.809558] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.930 [2024-07-12 17:26:01.809567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809604] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809614] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809622] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 
ms) 00:15:22.930 [2024-07-12 17:26:01.809628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809634] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:22.930 [2024-07-12 17:26:01.809642] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:22.930 [2024-07-12 17:26:01.809649] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:22.930 [2024-07-12 17:26:01.809670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809786] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.930 [2024-07-12 17:26:01.809792] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.930 [2024-07-12 17:26:01.809796] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.930 [2024-07-12 17:26:01.809801] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.930 [2024-07-12 17:26:01.809809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.930 [2024-07-12 17:26:01.809818] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.930 [2024-07-12 17:26:01.809824] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.930 [2024-07-12 17:26:01.809831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809840] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.930 [2024-07-12 17:26:01.809845] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.930 [2024-07-12 17:26:01.809853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809862] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.930 [2024-07-12 17:26:01.809868] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.930 [2024-07-12 17:26:01.809875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.809884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.809921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.930 ===================================================== 00:15:22.930 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.930 ===================================================== 00:15:22.930 Controller Capabilities/Features 00:15:22.930 ================================ 00:15:22.930 Vendor ID: 4e58 00:15:22.930 Subsystem Vendor ID: 4e58 00:15:22.930 Serial Number: SPDK1 00:15:22.930 Model Number: SPDK bdev Controller 00:15:22.930 Firmware Version: 24.01.1 00:15:22.930 Recommended Arb Burst: 6 00:15:22.930 IEEE OUI Identifier: 8d 6b 50 00:15:22.930 Multi-path I/O 00:15:22.930 May have multiple subsystem ports: Yes 00:15:22.930 May have multiple controllers: Yes 00:15:22.930 Associated with SR-IOV VF: No 00:15:22.930 Max Data Transfer Size: 131072 00:15:22.930 Max Number of Namespaces: 32 00:15:22.930 Max Number of I/O Queues: 127 00:15:22.930 NVMe Specification Version (VS): 1.3 00:15:22.930 NVMe Specification Version (Identify): 1.3 00:15:22.930 Maximum Queue Entries: 256 00:15:22.930 
Contiguous Queues Required: Yes 00:15:22.930 Arbitration Mechanisms Supported 00:15:22.930 Weighted Round Robin: Not Supported 00:15:22.930 Vendor Specific: Not Supported 00:15:22.930 Reset Timeout: 15000 ms 00:15:22.930 Doorbell Stride: 4 bytes 00:15:22.930 NVM Subsystem Reset: Not Supported 00:15:22.930 Command Sets Supported 00:15:22.930 NVM Command Set: Supported 00:15:22.930 Boot Partition: Not Supported 00:15:22.930 Memory Page Size Minimum: 4096 bytes 00:15:22.930 Memory Page Size Maximum: 4096 bytes 00:15:22.930 Persistent Memory Region: Not Supported 00:15:22.930 Optional Asynchronous Events Supported 00:15:22.930 Namespace Attribute Notices: Supported 00:15:22.930 Firmware Activation Notices: Not Supported 00:15:22.930 ANA Change Notices: Not Supported 00:15:22.930 PLE Aggregate Log Change Notices: Not Supported 00:15:22.930 LBA Status Info Alert Notices: Not Supported 00:15:22.930 EGE Aggregate Log Change Notices: Not Supported 00:15:22.930 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.930 Zone Descriptor Change Notices: Not Supported 00:15:22.930 Discovery Log Change Notices: Not Supported 00:15:22.930 Controller Attributes 00:15:22.930 128-bit Host Identifier: Supported 00:15:22.930 Non-Operational Permissive Mode: Not Supported 00:15:22.930 NVM Sets: Not Supported 00:15:22.930 Read Recovery Levels: Not Supported 00:15:22.930 Endurance Groups: Not Supported 00:15:22.930 Predictable Latency Mode: Not Supported 00:15:22.930 Traffic Based Keep ALive: Not Supported 00:15:22.930 Namespace Granularity: Not Supported 00:15:22.930 SQ Associations: Not Supported 00:15:22.930 UUID List: Not Supported 00:15:22.930 Multi-Domain Subsystem: Not Supported 00:15:22.930 Fixed Capacity Management: Not Supported 00:15:22.930 Variable Capacity Management: Not Supported 00:15:22.930 Delete Endurance Group: Not Supported 00:15:22.930 Delete NVM Set: Not Supported 00:15:22.930 Extended LBA Formats Supported: Not Supported 00:15:22.930 Flexible Data Placement 
Supported: Not Supported 00:15:22.930 00:15:22.930 Controller Memory Buffer Support 00:15:22.930 ================================ 00:15:22.930 Supported: No 00:15:22.930 00:15:22.930 Persistent Memory Region Support 00:15:22.930 ================================ 00:15:22.930 Supported: No 00:15:22.930 00:15:22.930 Admin Command Set Attributes 00:15:22.930 ============================ 00:15:22.930 Security Send/Receive: Not Supported 00:15:22.930 Format NVM: Not Supported 00:15:22.930 Firmware Activate/Download: Not Supported 00:15:22.930 Namespace Management: Not Supported 00:15:22.930 Device Self-Test: Not Supported 00:15:22.930 Directives: Not Supported 00:15:22.930 NVMe-MI: Not Supported 00:15:22.930 Virtualization Management: Not Supported 00:15:22.930 Doorbell Buffer Config: Not Supported 00:15:22.930 Get LBA Status Capability: Not Supported 00:15:22.930 Command & Feature Lockdown Capability: Not Supported 00:15:22.930 Abort Command Limit: 4 00:15:22.930 Async Event Request Limit: 4 00:15:22.930 Number of Firmware Slots: N/A 00:15:22.930 Firmware Slot 1 Read-Only: N/A 00:15:22.930 Firmware Activation Without Reset: N/A 00:15:22.930 Multiple Update Detection Support: N/A 00:15:22.930 Firmware Update Granularity: No Information Provided 00:15:22.930 Per-Namespace SMART Log: No 00:15:22.930 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.930 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:22.930 Command Effects Log Page: Supported 00:15:22.930 Get Log Page Extended Data: Supported 00:15:22.930 Telemetry Log Pages: Not Supported 00:15:22.930 Persistent Event Log Pages: Not Supported 00:15:22.930 Supported Log Pages Log Page: May Support 00:15:22.930 Commands Supported & Effects Log Page: Not Supported 00:15:22.930 Feature Identifiers & Effects Log Page:May Support 00:15:22.930 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.930 Data Area 4 for Telemetry Log: Not Supported 00:15:22.930 Error Log Page Entries Supported: 128 00:15:22.930 Keep 
Alive: Supported 00:15:22.930 Keep Alive Granularity: 10000 ms 00:15:22.930 00:15:22.930 NVM Command Set Attributes 00:15:22.930 ========================== 00:15:22.930 Submission Queue Entry Size 00:15:22.930 Max: 64 00:15:22.930 Min: 64 00:15:22.930 Completion Queue Entry Size 00:15:22.930 Max: 16 00:15:22.930 Min: 16 00:15:22.930 Number of Namespaces: 32 00:15:22.930 Compare Command: Supported 00:15:22.930 Write Uncorrectable Command: Not Supported 00:15:22.930 Dataset Management Command: Supported 00:15:22.930 Write Zeroes Command: Supported 00:15:22.930 Set Features Save Field: Not Supported 00:15:22.930 Reservations: Not Supported 00:15:22.930 Timestamp: Not Supported 00:15:22.930 Copy: Supported 00:15:22.930 Volatile Write Cache: Present 00:15:22.930 Atomic Write Unit (Normal): 1 00:15:22.930 Atomic Write Unit (PFail): 1 00:15:22.930 Atomic Compare & Write Unit: 1 00:15:22.930 Fused Compare & Write: Supported 00:15:22.930 Scatter-Gather List 00:15:22.930 SGL Command Set: Supported (Dword aligned) 00:15:22.930 SGL Keyed: Not Supported 00:15:22.930 SGL Bit Bucket Descriptor: Not Supported 00:15:22.930 SGL Metadata Pointer: Not Supported 00:15:22.930 Oversized SGL: Not Supported 00:15:22.930 SGL Metadata Address: Not Supported 00:15:22.930 SGL Offset: Not Supported 00:15:22.930 Transport SGL Data Block: Not Supported 00:15:22.930 Replay Protected Memory Block: Not Supported 00:15:22.930 00:15:22.930 Firmware Slot Information 00:15:22.930 ========================= 00:15:22.930 Active slot: 1 00:15:22.930 Slot 1 Firmware Revision: 24.01.1 00:15:22.930 00:15:22.930 00:15:22.930 Commands Supported and Effects 00:15:22.930 ============================== 00:15:22.930 Admin Commands 00:15:22.930 -------------- 00:15:22.930 Get Log Page (02h): Supported 00:15:22.930 Identify (06h): Supported 00:15:22.930 Abort (08h): Supported 00:15:22.930 Set Features (09h): Supported 00:15:22.930 Get Features (0Ah): Supported 00:15:22.930 Asynchronous Event Request (0Ch): Supported 
00:15:22.930 Keep Alive (18h): Supported 00:15:22.930 I/O Commands 00:15:22.930 ------------ 00:15:22.930 Flush (00h): Supported LBA-Change 00:15:22.930 Write (01h): Supported LBA-Change 00:15:22.930 Read (02h): Supported 00:15:22.930 Compare (05h): Supported 00:15:22.930 Write Zeroes (08h): Supported LBA-Change 00:15:22.930 Dataset Management (09h): Supported LBA-Change 00:15:22.930 Copy (19h): Supported LBA-Change 00:15:22.930 Unknown (79h): Supported LBA-Change 00:15:22.930 Unknown (7Ah): Supported 00:15:22.930 00:15:22.930 Error Log 00:15:22.930 ========= 00:15:22.930 00:15:22.930 Arbitration 00:15:22.930 =========== 00:15:22.930 Arbitration Burst: 1 00:15:22.930 00:15:22.930 Power Management 00:15:22.930 ================ 00:15:22.930 Number of Power States: 1 00:15:22.930 Current Power State: Power State #0 00:15:22.930 Power State #0: 00:15:22.930 Max Power: 0.00 W 00:15:22.930 Non-Operational State: Operational 00:15:22.930 Entry Latency: Not Reported 00:15:22.930 Exit Latency: Not Reported 00:15:22.930 Relative Read Throughput: 0 00:15:22.930 Relative Read Latency: 0 00:15:22.930 Relative Write Throughput: 0 00:15:22.930 Relative Write Latency: 0 00:15:22.930 Idle Power: Not Reported 00:15:22.930 Active Power: Not Reported 00:15:22.930 Non-Operational Permissive Mode: Not Supported 00:15:22.930 00:15:22.930 Health Information 00:15:22.930 ================== 00:15:22.930 Critical Warnings: 00:15:22.930 Available Spare Space: OK 00:15:22.930 Temperature: OK 00:15:22.930 Device Reliability: OK 00:15:22.930 Read Only: No 00:15:22.930 Volatile Memory Backup: OK 00:15:22.930 Current Temperature: 0 Kelvin[2024-07-12 17:26:01.810047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.930 [2024-07-12 17:26:01.810058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.810089] 
nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:22.930 [2024-07-12 17:26:01.810101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.810109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.930 [2024-07-12 17:26:01.810117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.931 [2024-07-12 17:26:01.810125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.931 [2024-07-12 17:26:01.813264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:22.931 [2024-07-12 17:26:01.813278] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:22.931 [2024-07-12 17:26:01.813992] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:22.931 [2024-07-12 17:26:01.814000] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:22.931 [2024-07-12 17:26:01.814963] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:22.931 [2024-07-12 17:26:01.814976] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:22.931 [2024-07-12 17:26:01.815032] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:22.931 
[2024-07-12 17:26:01.818262] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.931 (-273 Celsius) 00:15:22.931 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.931 Available Spare: 0% 00:15:22.931 Available Spare Threshold: 0% 00:15:22.931 Life Percentage Used: 0% 00:15:22.931 Data Units Read: 0 00:15:22.931 Data Units Written: 0 00:15:22.931 Host Read Commands: 0 00:15:22.931 Host Write Commands: 0 00:15:22.931 Controller Busy Time: 0 minutes 00:15:22.931 Power Cycles: 0 00:15:22.931 Power On Hours: 0 hours 00:15:22.931 Unsafe Shutdowns: 0 00:15:22.931 Unrecoverable Media Errors: 0 00:15:22.931 Lifetime Error Log Entries: 0 00:15:22.931 Warning Temperature Time: 0 minutes 00:15:22.931 Critical Temperature Time: 0 minutes 00:15:22.931 00:15:22.931 Number of Queues 00:15:22.931 ================ 00:15:22.931 Number of I/O Submission Queues: 127 00:15:22.931 Number of I/O Completion Queues: 127 00:15:22.931 00:15:22.931 Active Namespaces 00:15:22.931 ================= 00:15:22.931 Namespace ID:1 00:15:22.931 Error Recovery Timeout: Unlimited 00:15:22.931 Command Set Identifier: NVM (00h) 00:15:22.931 Deallocate: Supported 00:15:22.931 Deallocated/Unwritten Error: Not Supported 00:15:22.931 Deallocated Read Value: Unknown 00:15:22.931 Deallocate in Write Zeroes: Not Supported 00:15:22.931 Deallocated Guard Field: 0xFFFF 00:15:22.931 Flush: Supported 00:15:22.931 Reservation: Supported 00:15:22.931 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.931 Size (in LBAs): 131072 (0GiB) 00:15:22.931 Capacity (in LBAs): 131072 (0GiB) 00:15:22.931 Utilization (in LBAs): 131072 (0GiB) 00:15:22.931 NGUID: AEF9FAAE202C41189FD95F13D12936C0 00:15:22.931 UUID: aef9faae-202c-4118-9fd9-5f13d12936c0 00:15:22.931 Thin Provisioning: Not Supported 00:15:22.931 Per-NS Atomic Units: Yes 00:15:22.931 Atomic Boundary Size (Normal): 0 00:15:22.931 Atomic Boundary Size (PFail): 0 
00:15:22.931 Atomic Boundary Offset: 0 00:15:22.931 Maximum Single Source Range Length: 65535 00:15:22.931 Maximum Copy Length: 65535 00:15:22.931 Maximum Source Range Count: 1 00:15:22.931 NGUID/EUI64 Never Reused: No 00:15:22.931 Namespace Write Protected: No 00:15:22.931 Number of LBA Formats: 1 00:15:22.931 Current LBA Format: LBA Format #00 00:15:22.931 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:22.931 00:15:22.931 17:26:01 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:23.189 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.459 Initializing NVMe Controllers 00:15:28.459 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:28.459 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:28.459 Initialization complete. Launching workers. 
00:15:28.459 ======================================================== 00:15:28.459 Latency(us) 00:15:28.459 Device Information : IOPS MiB/s Average min max 00:15:28.459 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24266.92 94.79 5274.90 1437.18 9007.16 00:15:28.459 ======================================================== 00:15:28.459 Total : 24266.92 94.79 5274.90 1437.18 9007.16 00:15:28.459 00:15:28.459 17:26:07 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:28.459 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.731 Initializing NVMe Controllers 00:15:33.731 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:33.731 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:33.731 Initialization complete. Launching workers. 
00:15:33.731 ======================================================== 00:15:33.731 Latency(us) 00:15:33.731 Device Information : IOPS MiB/s Average min max 00:15:33.731 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16053.83 62.71 7978.20 7157.73 8040.43 00:15:33.731 ======================================================== 00:15:33.731 Total : 16053.83 62.71 7978.20 7157.73 8040.43 00:15:33.731 00:15:33.731 17:26:12 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:33.731 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.019 Initializing NVMe Controllers 00:15:39.019 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:39.019 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:39.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:39.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:39.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:39.019 Initialization complete. Launching workers. 
00:15:39.019 Starting thread on core 2 00:15:39.019 Starting thread on core 3 00:15:39.019 Starting thread on core 1 00:15:39.019 17:26:17 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:39.019 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.308 Initializing NVMe Controllers 00:15:42.308 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.308 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.308 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:42.308 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:42.308 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:42.308 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:42.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:42.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:42.308 Initialization complete. Launching workers. 
00:15:42.308 Starting thread on core 1 with urgent priority queue 00:15:42.308 Starting thread on core 2 with urgent priority queue 00:15:42.308 Starting thread on core 3 with urgent priority queue 00:15:42.308 Starting thread on core 0 with urgent priority queue 00:15:42.308 SPDK bdev Controller (SPDK1 ) core 0: 8066.67 IO/s 12.40 secs/100000 ios 00:15:42.308 SPDK bdev Controller (SPDK1 ) core 1: 8714.67 IO/s 11.47 secs/100000 ios 00:15:42.308 SPDK bdev Controller (SPDK1 ) core 2: 8259.00 IO/s 12.11 secs/100000 ios 00:15:42.308 SPDK bdev Controller (SPDK1 ) core 3: 7780.00 IO/s 12.85 secs/100000 ios 00:15:42.308 ======================================================== 00:15:42.308 00:15:42.308 17:26:21 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:42.308 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.567 Initializing NVMe Controllers 00:15:42.567 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.567 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.567 Namespace ID: 1 size: 0GB 00:15:42.567 Initialization complete. 00:15:42.567 INFO: using host memory buffer for IO 00:15:42.567 Hello world! 00:15:42.567 17:26:21 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:42.826 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.206 Initializing NVMe Controllers 00:15:44.206 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.206 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.206 Initialization complete. Launching workers. 
00:15:44.206 submit (in ns) avg, min, max = 8959.1, 4450.9, 4000586.4 00:15:44.206 complete (in ns) avg, min, max = 20494.2, 2680.0, 4997079.1 00:15:44.206 00:15:44.206 Submit histogram 00:15:44.206 ================ 00:15:44.206 Range in us Cumulative Count 00:15:44.206 4.451 - 4.480: 0.1824% ( 31) 00:15:44.206 4.480 - 4.509: 1.8708% ( 287) 00:15:44.206 4.509 - 4.538: 4.2417% ( 403) 00:15:44.206 4.538 - 4.567: 6.8596% ( 445) 00:15:44.206 4.567 - 4.596: 13.4075% ( 1113) 00:15:44.206 4.596 - 4.625: 23.4439% ( 1706) 00:15:44.206 4.625 - 4.655: 34.6982% ( 1913) 00:15:44.206 4.655 - 4.684: 45.9583% ( 1914) 00:15:44.206 4.684 - 4.713: 57.3656% ( 1939) 00:15:44.206 4.713 - 4.742: 68.7787% ( 1940) 00:15:44.206 4.742 - 4.771: 76.9032% ( 1381) 00:15:44.206 4.771 - 4.800: 82.1861% ( 898) 00:15:44.206 4.800 - 4.829: 85.5807% ( 577) 00:15:44.206 4.829 - 4.858: 87.4809% ( 323) 00:15:44.206 4.858 - 4.887: 89.1517% ( 284) 00:15:44.206 4.887 - 4.916: 90.7872% ( 278) 00:15:44.206 4.916 - 4.945: 92.5050% ( 292) 00:15:44.206 4.945 - 4.975: 94.5052% ( 340) 00:15:44.206 4.975 - 5.004: 96.1878% ( 286) 00:15:44.206 5.004 - 5.033: 97.3997% ( 206) 00:15:44.206 5.033 - 5.062: 98.2645% ( 147) 00:15:44.206 5.062 - 5.091: 98.8234% ( 95) 00:15:44.206 5.091 - 5.120: 99.2411% ( 71) 00:15:44.206 5.120 - 5.149: 99.4058% ( 28) 00:15:44.206 5.149 - 5.178: 99.4764% ( 12) 00:15:44.206 5.178 - 5.207: 99.4999% ( 4) 00:15:44.206 5.236 - 5.265: 99.5058% ( 1) 00:15:44.206 6.924 - 6.953: 99.5117% ( 1) 00:15:44.206 7.011 - 7.040: 99.5176% ( 1) 00:15:44.206 7.069 - 7.098: 99.5235% ( 1) 00:15:44.206 7.185 - 7.215: 99.5294% ( 1) 00:15:44.206 7.215 - 7.244: 99.5352% ( 1) 00:15:44.206 7.302 - 7.331: 99.5470% ( 2) 00:15:44.206 7.331 - 7.360: 99.5529% ( 1) 00:15:44.206 7.447 - 7.505: 99.5705% ( 3) 00:15:44.206 7.505 - 7.564: 99.5764% ( 1) 00:15:44.206 7.564 - 7.622: 99.5941% ( 3) 00:15:44.206 7.622 - 7.680: 99.6117% ( 3) 00:15:44.206 7.738 - 7.796: 99.6294% ( 3) 00:15:44.206 7.796 - 7.855: 99.6529% ( 4) 00:15:44.206 
7.855 - 7.913: 99.6588% ( 1) 00:15:44.206 7.913 - 7.971: 99.6764% ( 3) 00:15:44.206 7.971 - 8.029: 99.7000% ( 4) 00:15:44.206 8.029 - 8.087: 99.7117% ( 2) 00:15:44.206 8.087 - 8.145: 99.7294% ( 3) 00:15:44.206 8.145 - 8.204: 99.7470% ( 3) 00:15:44.206 8.204 - 8.262: 99.7529% ( 1) 00:15:44.206 8.262 - 8.320: 99.7588% ( 1) 00:15:44.206 8.378 - 8.436: 99.7647% ( 1) 00:15:44.206 8.495 - 8.553: 99.7764% ( 2) 00:15:44.206 8.553 - 8.611: 99.7823% ( 1) 00:15:44.206 8.611 - 8.669: 99.7882% ( 1) 00:15:44.206 8.669 - 8.727: 99.8000% ( 2) 00:15:44.206 8.727 - 8.785: 99.8117% ( 2) 00:15:44.206 8.844 - 8.902: 99.8176% ( 1) 00:15:44.206 8.902 - 8.960: 99.8353% ( 3) 00:15:44.206 9.135 - 9.193: 99.8412% ( 1) 00:15:44.206 9.309 - 9.367: 99.8470% ( 1) 00:15:44.206 9.425 - 9.484: 99.8529% ( 1) 00:15:44.206 9.775 - 9.833: 99.8647% ( 2) 00:15:44.206 13.149 - 13.207: 99.8706% ( 1) 00:15:44.206 13.556 - 13.615: 99.8765% ( 1) 00:15:44.206 13.964 - 14.022: 99.8823% ( 1) 00:15:44.206 15.709 - 15.825: 99.8882% ( 1) 00:15:44.206 56.785 - 57.018: 99.8941% ( 1) 00:15:44.206 3991.738 - 4021.527: 100.0000% ( 18) 00:15:44.206 00:15:44.206 Complete histogram 00:15:44.206 ================== 00:15:44.206 Range in us Cumulative Count 00:15:44.206 2.676 - 2.691: 0.0765% ( 13) 00:15:44.206 2.691 - 2.705: 4.9535% ( 829) 00:15:44.206 2.705 - 2.720: 31.3566% ( 4488) 00:15:44.206 2.720 - 2.735: 57.9774% ( 4525) 00:15:44.206 2.735 - 2.749: 65.6313% ( 1301) 00:15:44.206 2.749 - 2.764: 71.6202% ( 1018) 00:15:44.206 2.764 - 2.778: 84.1981% ( 2138) 00:15:44.206 2.778 - 2.793: 93.2757% ( 1543) 00:15:44.206 2.793 - 2.807: 96.1878% ( 495) 00:15:44.206 2.807 - 2.822: 97.8527% ( 283) 00:15:44.206 2.822 - 2.836: 98.6175% ( 130) 00:15:44.206 2.836 - 2.851: 98.9293% ( 53) 00:15:44.206 2.851 - 2.865: 99.1293% ( 34) 00:15:44.206 2.865 - 2.880: 99.2234% ( 16) 00:15:44.206 2.880 - 2.895: 99.2529% ( 5) 00:15:44.206 2.895 - 2.909: 99.2646% ( 2) 00:15:44.206 2.909 - 2.924: 99.2764% ( 2) 00:15:44.206 2.924 - 2.938: 99.2823% ( 1) 
00:15:44.206 2.953 - 2.967: 99.2882% ( 1) 00:15:44.206 3.069 - 3.084: 99.2940% ( 1) 00:15:44.206 5.207 - 5.236: 99.2999% ( 1) 00:15:44.206 5.236 - 5.265: 99.3058% ( 1) 00:15:44.206 5.295 - 5.324: 99.3117% ( 1) 00:15:44.206 5.411 - 5.440: 99.3293% ( 3) 00:15:44.206 5.440 - 5.469: 99.3411% ( 2) 00:15:44.206 5.527 - 5.556: 99.3470% ( 1) 00:15:44.206 5.556 - 5.585: 99.3529% ( 1) 00:15:44.206 5.585 - 5.615: 99.3646% ( 2) 00:15:44.206 5.731 - 5.760: 99.3705% ( 1) 00:15:44.206 5.818 - 5.847: 99.3764% ( 1) 00:15:44.206 5.847 - 5.876: 99.3823% ( 1) 00:15:44.206 5.905 - 5.935: 99.3940% ( 2) 00:15:44.206 6.022 - 6.051: 99.3999% ( 1) 00:15:44.206 6.109 - 6.138: 99.4058% ( 1) 00:15:44.206 6.138 - 6.167: 99.4117% ( 1) 00:15:44.206 6.167 - 6.196: 99.4235% ( 2) 00:15:44.206 6.196 - 6.225: 99.4293% ( 1) 00:15:44.206 6.225 - 6.255: 99.4352% ( 1) 00:15:44.206 6.284 - 6.313: 99.4411% ( 1) 00:15:44.206 6.313 - 6.342: 99.4529% ( 2) 00:15:44.206 6.371 - 6.400: 99.4646% ( 2) 00:15:44.206 6.400 - 6.429: 99.4705% ( 1) 00:15:44.206 6.458 - 6.487: 99.4764% ( 1) 00:15:44.206 6.545 - 6.575: 99.4823% ( 1) 00:15:44.206 6.575 - 6.604: 99.4882% ( 1) 00:15:44.206 6.691 - 6.720: 99.4941% ( 1) 00:15:44.206 6.749 - 6.778: 99.4999% ( 1) 00:15:44.206 6.778 - 6.807: 99.5058% ( 1) 00:15:44.206 6.982 - 7.011: 99.5117% ( 1) 00:15:44.206 7.069 - 7.098: 99.5235% ( 2) 00:15:44.206 7.273 - 7.302: 99.5294% ( 1) 00:15:44.206 7.447 - 7.505: 99.5352% ( 1) 00:15:44.206 7.564 - 7.622: 99.5411% ( 1) 00:15:44.206 8.436 - 8.495: 99.5470% ( 1) 00:15:44.206 9.658 - 9.716: 99.5529% ( 1) 00:15:44.206 11.287 - 11.345: 99.5588% ( 1) 00:15:44.206 3991.738 - 4021.527: 99.9882% ( 73) 00:15:44.206 4974.778 - 5004.567: 100.0000% ( 2) 00:15:44.206 00:15:44.206 17:26:22 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:44.206 17:26:22 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:44.206 17:26:22 -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:44.206 17:26:22 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:44.206 17:26:22 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:44.207 [2024-07-12 17:26:23.071356] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:44.207 [ 00:15:44.207 { 00:15:44.207 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.207 "subtype": "Discovery", 00:15:44.207 "listen_addresses": [], 00:15:44.207 "allow_any_host": true, 00:15:44.207 "hosts": [] 00:15:44.207 }, 00:15:44.207 { 00:15:44.207 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:44.207 "subtype": "NVMe", 00:15:44.207 "listen_addresses": [ 00:15:44.207 { 00:15:44.207 "transport": "VFIOUSER", 00:15:44.207 "trtype": "VFIOUSER", 00:15:44.207 "adrfam": "IPv4", 00:15:44.207 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:44.207 "trsvcid": "0" 00:15:44.207 } 00:15:44.207 ], 00:15:44.207 "allow_any_host": true, 00:15:44.207 "hosts": [], 00:15:44.207 "serial_number": "SPDK1", 00:15:44.207 "model_number": "SPDK bdev Controller", 00:15:44.207 "max_namespaces": 32, 00:15:44.207 "min_cntlid": 1, 00:15:44.207 "max_cntlid": 65519, 00:15:44.207 "namespaces": [ 00:15:44.207 { 00:15:44.207 "nsid": 1, 00:15:44.207 "bdev_name": "Malloc1", 00:15:44.207 "name": "Malloc1", 00:15:44.207 "nguid": "AEF9FAAE202C41189FD95F13D12936C0", 00:15:44.207 "uuid": "aef9faae-202c-4118-9fd9-5f13d12936c0" 00:15:44.207 } 00:15:44.207 ] 00:15:44.207 }, 00:15:44.207 { 00:15:44.207 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:44.207 "subtype": "NVMe", 00:15:44.207 "listen_addresses": [ 00:15:44.207 { 00:15:44.207 "transport": "VFIOUSER", 00:15:44.207 "trtype": "VFIOUSER", 00:15:44.207 "adrfam": "IPv4", 00:15:44.207 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:15:44.207 "trsvcid": "0" 00:15:44.207 } 00:15:44.207 ], 00:15:44.207 "allow_any_host": true, 00:15:44.207 "hosts": [], 00:15:44.207 "serial_number": "SPDK2", 00:15:44.207 "model_number": "SPDK bdev Controller", 00:15:44.207 "max_namespaces": 32, 00:15:44.207 "min_cntlid": 1, 00:15:44.207 "max_cntlid": 65519, 00:15:44.207 "namespaces": [ 00:15:44.207 { 00:15:44.207 "nsid": 1, 00:15:44.207 "bdev_name": "Malloc2", 00:15:44.207 "name": "Malloc2", 00:15:44.207 "nguid": "E60DE16C88C547909311BB1D7B925C81", 00:15:44.207 "uuid": "e60de16c-88c5-4790-9311-bb1d7b925c81" 00:15:44.207 } 00:15:44.207 ] 00:15:44.207 } 00:15:44.207 ] 00:15:44.207 17:26:23 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:44.207 17:26:23 -- target/nvmf_vfio_user.sh@34 -- # aerpid=4069514 00:15:44.207 17:26:23 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:44.207 17:26:23 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:44.207 17:26:23 -- common/autotest_common.sh@1244 -- # local i=0 00:15:44.207 17:26:23 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:44.207 17:26:23 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:44.207 17:26:23 -- common/autotest_common.sh@1255 -- # return 0 00:15:44.207 17:26:23 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:44.207 17:26:23 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:44.207 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.466 Malloc3 00:15:44.466 17:26:23 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:44.726 17:26:23 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:44.726 Asynchronous Event Request test 00:15:44.726 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.726 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:44.726 Registering asynchronous event callbacks... 00:15:44.726 Starting namespace attribute notice tests for all controllers... 00:15:44.726 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:44.726 aer_cb - Changed Namespace 00:15:44.726 Cleaning up... 
00:15:44.985 [ 00:15:44.985 { 00:15:44.985 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.985 "subtype": "Discovery", 00:15:44.985 "listen_addresses": [], 00:15:44.985 "allow_any_host": true, 00:15:44.985 "hosts": [] 00:15:44.985 }, 00:15:44.985 { 00:15:44.985 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:44.985 "subtype": "NVMe", 00:15:44.985 "listen_addresses": [ 00:15:44.985 { 00:15:44.985 "transport": "VFIOUSER", 00:15:44.985 "trtype": "VFIOUSER", 00:15:44.985 "adrfam": "IPv4", 00:15:44.985 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:44.985 "trsvcid": "0" 00:15:44.985 } 00:15:44.985 ], 00:15:44.985 "allow_any_host": true, 00:15:44.985 "hosts": [], 00:15:44.985 "serial_number": "SPDK1", 00:15:44.985 "model_number": "SPDK bdev Controller", 00:15:44.985 "max_namespaces": 32, 00:15:44.985 "min_cntlid": 1, 00:15:44.985 "max_cntlid": 65519, 00:15:44.985 "namespaces": [ 00:15:44.985 { 00:15:44.985 "nsid": 1, 00:15:44.985 "bdev_name": "Malloc1", 00:15:44.985 "name": "Malloc1", 00:15:44.985 "nguid": "AEF9FAAE202C41189FD95F13D12936C0", 00:15:44.985 "uuid": "aef9faae-202c-4118-9fd9-5f13d12936c0" 00:15:44.985 }, 00:15:44.985 { 00:15:44.985 "nsid": 2, 00:15:44.985 "bdev_name": "Malloc3", 00:15:44.985 "name": "Malloc3", 00:15:44.985 "nguid": "CB91E30F1FC2444589394079A3E8BD99", 00:15:44.985 "uuid": "cb91e30f-1fc2-4445-8939-4079a3e8bd99" 00:15:44.985 } 00:15:44.985 ] 00:15:44.985 }, 00:15:44.985 { 00:15:44.985 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:44.985 "subtype": "NVMe", 00:15:44.985 "listen_addresses": [ 00:15:44.985 { 00:15:44.985 "transport": "VFIOUSER", 00:15:44.985 "trtype": "VFIOUSER", 00:15:44.985 "adrfam": "IPv4", 00:15:44.985 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:44.985 "trsvcid": "0" 00:15:44.986 } 00:15:44.986 ], 00:15:44.986 "allow_any_host": true, 00:15:44.986 "hosts": [], 00:15:44.986 "serial_number": "SPDK2", 00:15:44.986 "model_number": "SPDK bdev Controller", 00:15:44.986 "max_namespaces": 32, 00:15:44.986 
"min_cntlid": 1, 00:15:44.986 "max_cntlid": 65519, 00:15:44.986 "namespaces": [ 00:15:44.986 { 00:15:44.986 "nsid": 1, 00:15:44.986 "bdev_name": "Malloc2", 00:15:44.986 "name": "Malloc2", 00:15:44.986 "nguid": "E60DE16C88C547909311BB1D7B925C81", 00:15:44.986 "uuid": "e60de16c-88c5-4790-9311-bb1d7b925c81" 00:15:44.986 } 00:15:44.986 ] 00:15:44.986 } 00:15:44.986 ] 00:15:44.986 17:26:23 -- target/nvmf_vfio_user.sh@44 -- # wait 4069514 00:15:44.986 17:26:23 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.986 17:26:23 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:44.986 17:26:23 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:44.986 17:26:23 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:44.986 [2024-07-12 17:26:23.830530] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:44.986 [2024-07-12 17:26:23.830569] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069584 ] 00:15:44.986 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.986 [2024-07-12 17:26:23.868590] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:44.986 [2024-07-12 17:26:23.878508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:44.986 [2024-07-12 17:26:23.878533] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fefb2aab000 00:15:44.986 [2024-07-12 17:26:23.879515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.880526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.881532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.882538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.883542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.884551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.885557] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.886562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.986 [2024-07-12 17:26:23.887570] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:44.986 [2024-07-12 17:26:23.887584] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fefb1871000 00:15:44.986 [2024-07-12 17:26:23.888991] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:44.986 [2024-07-12 17:26:23.908746] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:44.986 [2024-07-12 17:26:23.908776] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:44.986 [2024-07-12 17:26:23.910851] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:44.986 [2024-07-12 17:26:23.910899] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:44.986 [2024-07-12 17:26:23.910995] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:44.986 [2024-07-12 17:26:23.911013] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:44.986 [2024-07-12 17:26:23.911021] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:44.986 [2024-07-12 17:26:23.911859] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:44.986 [2024-07-12 17:26:23.911872] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:44.986 [2024-07-12 17:26:23.911881] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:44.986 [2024-07-12 17:26:23.912890] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:44.986 [2024-07-12 17:26:23.912902] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:44.986 [2024-07-12 17:26:23.912912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:44.986 [2024-07-12 17:26:23.913899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:44.986 [2024-07-12 17:26:23.913911] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:44.986 [2024-07-12 17:26:23.914904] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:44.986 [2024-07-12 17:26:23.914916] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:44.986 [2024-07-12 17:26:23.914923] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:44.986 [2024-07-12 17:26:23.914932] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:44.986 [2024-07-12 17:26:23.915042] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:44.986 [2024-07-12 17:26:23.915048] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:44.986 [2024-07-12 17:26:23.915055] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:44.986 [2024-07-12 17:26:23.915918] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:44.986 [2024-07-12 17:26:23.916922] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:44.986 [2024-07-12 17:26:23.917931] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:44.986 [2024-07-12 17:26:23.918959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:44.986 [2024-07-12 17:26:23.919955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:44.986 [2024-07-12 17:26:23.919967] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:44.986 [2024-07-12 17:26:23.919973] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.919998] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:44.986 [2024-07-12 17:26:23.920008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.920021] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.986 [2024-07-12 17:26:23.920028] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.986 [2024-07-12 17:26:23.920042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.986 [2024-07-12 17:26:23.930269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:44.986 [2024-07-12 17:26:23.930285] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:44.986 [2024-07-12 17:26:23.930291] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:44.986 [2024-07-12 17:26:23.930296] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:44.986 [2024-07-12 17:26:23.930302] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:44.986 [2024-07-12 17:26:23.930309] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:44.986 [2024-07-12 17:26:23.930314] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:44.986 [2024-07-12 17:26:23.930321] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.930333] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.930346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:44.986 [2024-07-12 17:26:23.938261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:44.986 [2024-07-12 17:26:23.938280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.986 [2024-07-12 17:26:23.938292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.986 [2024-07-12 17:26:23.938302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.986 [2024-07-12 17:26:23.938312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.986 [2024-07-12 17:26:23.938319] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.938329] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.938341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:44.986 [2024-07-12 17:26:23.946262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:44.986 [2024-07-12 17:26:23.946272] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:44.986 [2024-07-12 17:26:23.946279] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.946287] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.946297] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:44.986 [2024-07-12 17:26:23.946309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:23.954262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:23.954339] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.954349] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.954359] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:45.246 [2024-07-12 17:26:23.954366] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:45.246 [2024-07-12 17:26:23.954375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:23.962261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:23.962278] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:45.246 [2024-07-12 17:26:23.962291] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.962301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.962311] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.246 [2024-07-12 17:26:23.962319] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.246 [2024-07-12 17:26:23.962328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:23.970262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:23.970280] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.970291] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.970300] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.246 
[2024-07-12 17:26:23.970306] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.246 [2024-07-12 17:26:23.970313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:23.978263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:23.978275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.978284] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.978294] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.978301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.978308] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.978314] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:45.246 [2024-07-12 17:26:23.978320] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:45.246 [2024-07-12 17:26:23.978327] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:45.246 [2024-07-12 
17:26:23.978346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:23.986261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:23.986279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:23.994262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:23.994280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:24.002264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:24.002281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:24.010262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:24.010281] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:45.246 [2024-07-12 17:26:24.010288] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:45.246 [2024-07-12 17:26:24.010293] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:45.246 [2024-07-12 17:26:24.010297] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:45.246 [2024-07-12 17:26:24.010306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 
cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:45.246 [2024-07-12 17:26:24.010315] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:45.246 [2024-07-12 17:26:24.010321] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:45.246 [2024-07-12 17:26:24.010329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:24.010338] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:45.246 [2024-07-12 17:26:24.010343] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.246 [2024-07-12 17:26:24.010351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:24.010360] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:45.246 [2024-07-12 17:26:24.010366] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:45.246 [2024-07-12 17:26:24.010373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:45.246 [2024-07-12 17:26:24.018263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:24.018292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 17:26:24.018304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:45.246 [2024-07-12 
17:26:24.018313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:45.246 ===================================================== 00:15:45.246 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.246 ===================================================== 00:15:45.246 Controller Capabilities/Features 00:15:45.246 ================================ 00:15:45.246 Vendor ID: 4e58 00:15:45.246 Subsystem Vendor ID: 4e58 00:15:45.246 Serial Number: SPDK2 00:15:45.246 Model Number: SPDK bdev Controller 00:15:45.246 Firmware Version: 24.01.1 00:15:45.246 Recommended Arb Burst: 6 00:15:45.246 IEEE OUI Identifier: 8d 6b 50 00:15:45.246 Multi-path I/O 00:15:45.246 May have multiple subsystem ports: Yes 00:15:45.246 May have multiple controllers: Yes 00:15:45.246 Associated with SR-IOV VF: No 00:15:45.246 Max Data Transfer Size: 131072 00:15:45.246 Max Number of Namespaces: 32 00:15:45.246 Max Number of I/O Queues: 127 00:15:45.246 NVMe Specification Version (VS): 1.3 00:15:45.246 NVMe Specification Version (Identify): 1.3 00:15:45.246 Maximum Queue Entries: 256 00:15:45.246 Contiguous Queues Required: Yes 00:15:45.246 Arbitration Mechanisms Supported 00:15:45.246 Weighted Round Robin: Not Supported 00:15:45.246 Vendor Specific: Not Supported 00:15:45.246 Reset Timeout: 15000 ms 00:15:45.246 Doorbell Stride: 4 bytes 00:15:45.246 NVM Subsystem Reset: Not Supported 00:15:45.246 Command Sets Supported 00:15:45.246 NVM Command Set: Supported 00:15:45.246 Boot Partition: Not Supported 00:15:45.246 Memory Page Size Minimum: 4096 bytes 00:15:45.246 Memory Page Size Maximum: 4096 bytes 00:15:45.246 Persistent Memory Region: Not Supported 00:15:45.246 Optional Asynchronous Events Supported 00:15:45.246 Namespace Attribute Notices: Supported 00:15:45.246 Firmware Activation Notices: Not Supported 00:15:45.246 ANA Change Notices: Not Supported 00:15:45.246 PLE 
Aggregate Log Change Notices: Not Supported 00:15:45.246 LBA Status Info Alert Notices: Not Supported 00:15:45.246 EGE Aggregate Log Change Notices: Not Supported 00:15:45.246 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.246 Zone Descriptor Change Notices: Not Supported 00:15:45.246 Discovery Log Change Notices: Not Supported 00:15:45.246 Controller Attributes 00:15:45.246 128-bit Host Identifier: Supported 00:15:45.246 Non-Operational Permissive Mode: Not Supported 00:15:45.246 NVM Sets: Not Supported 00:15:45.247 Read Recovery Levels: Not Supported 00:15:45.247 Endurance Groups: Not Supported 00:15:45.247 Predictable Latency Mode: Not Supported 00:15:45.247 Traffic Based Keep ALive: Not Supported 00:15:45.247 Namespace Granularity: Not Supported 00:15:45.247 SQ Associations: Not Supported 00:15:45.247 UUID List: Not Supported 00:15:45.247 Multi-Domain Subsystem: Not Supported 00:15:45.247 Fixed Capacity Management: Not Supported 00:15:45.247 Variable Capacity Management: Not Supported 00:15:45.247 Delete Endurance Group: Not Supported 00:15:45.247 Delete NVM Set: Not Supported 00:15:45.247 Extended LBA Formats Supported: Not Supported 00:15:45.247 Flexible Data Placement Supported: Not Supported 00:15:45.247 00:15:45.247 Controller Memory Buffer Support 00:15:45.247 ================================ 00:15:45.247 Supported: No 00:15:45.247 00:15:45.247 Persistent Memory Region Support 00:15:45.247 ================================ 00:15:45.247 Supported: No 00:15:45.247 00:15:45.247 Admin Command Set Attributes 00:15:45.247 ============================ 00:15:45.247 Security Send/Receive: Not Supported 00:15:45.247 Format NVM: Not Supported 00:15:45.247 Firmware Activate/Download: Not Supported 00:15:45.247 Namespace Management: Not Supported 00:15:45.247 Device Self-Test: Not Supported 00:15:45.247 Directives: Not Supported 00:15:45.247 NVMe-MI: Not Supported 00:15:45.247 Virtualization Management: Not Supported 00:15:45.247 Doorbell Buffer Config: 
Not Supported 00:15:45.247 Get LBA Status Capability: Not Supported 00:15:45.247 Command & Feature Lockdown Capability: Not Supported 00:15:45.247 Abort Command Limit: 4 00:15:45.247 Async Event Request Limit: 4 00:15:45.247 Number of Firmware Slots: N/A 00:15:45.247 Firmware Slot 1 Read-Only: N/A 00:15:45.247 Firmware Activation Without Reset: N/A 00:15:45.247 Multiple Update Detection Support: N/A 00:15:45.247 Firmware Update Granularity: No Information Provided 00:15:45.247 Per-Namespace SMART Log: No 00:15:45.247 Asymmetric Namespace Access Log Page: Not Supported 00:15:45.247 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:45.247 Command Effects Log Page: Supported 00:15:45.247 Get Log Page Extended Data: Supported 00:15:45.247 Telemetry Log Pages: Not Supported 00:15:45.247 Persistent Event Log Pages: Not Supported 00:15:45.247 Supported Log Pages Log Page: May Support 00:15:45.247 Commands Supported & Effects Log Page: Not Supported 00:15:45.247 Feature Identifiers & Effects Log Page:May Support 00:15:45.247 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.247 Data Area 4 for Telemetry Log: Not Supported 00:15:45.247 Error Log Page Entries Supported: 128 00:15:45.247 Keep Alive: Supported 00:15:45.247 Keep Alive Granularity: 10000 ms 00:15:45.247 00:15:45.247 NVM Command Set Attributes 00:15:45.247 ========================== 00:15:45.247 Submission Queue Entry Size 00:15:45.247 Max: 64 00:15:45.247 Min: 64 00:15:45.247 Completion Queue Entry Size 00:15:45.247 Max: 16 00:15:45.247 Min: 16 00:15:45.247 Number of Namespaces: 32 00:15:45.247 Compare Command: Supported 00:15:45.247 Write Uncorrectable Command: Not Supported 00:15:45.247 Dataset Management Command: Supported 00:15:45.247 Write Zeroes Command: Supported 00:15:45.247 Set Features Save Field: Not Supported 00:15:45.247 Reservations: Not Supported 00:15:45.247 Timestamp: Not Supported 00:15:45.247 Copy: Supported 00:15:45.247 Volatile Write Cache: Present 00:15:45.247 Atomic Write Unit 
(Normal): 1 00:15:45.247 Atomic Write Unit (PFail): 1 00:15:45.247 Atomic Compare & Write Unit: 1 00:15:45.247 Fused Compare & Write: Supported 00:15:45.247 Scatter-Gather List 00:15:45.247 SGL Command Set: Supported (Dword aligned) 00:15:45.247 SGL Keyed: Not Supported 00:15:45.247 SGL Bit Bucket Descriptor: Not Supported 00:15:45.247 SGL Metadata Pointer: Not Supported 00:15:45.247 Oversized SGL: Not Supported 00:15:45.247 SGL Metadata Address: Not Supported 00:15:45.247 SGL Offset: Not Supported 00:15:45.247 Transport SGL Data Block: Not Supported 00:15:45.247 Replay Protected Memory Block: Not Supported 00:15:45.247 00:15:45.247 Firmware Slot Information 00:15:45.247 ========================= 00:15:45.247 Active slot: 1 00:15:45.247 Slot 1 Firmware Revision: 24.01.1 00:15:45.247 00:15:45.247 00:15:45.247 Commands Supported and Effects 00:15:45.247 ============================== 00:15:45.247 Admin Commands 00:15:45.247 -------------- 00:15:45.247 Get Log Page (02h): Supported 00:15:45.247 Identify (06h): Supported 00:15:45.247 Abort (08h): Supported 00:15:45.247 Set Features (09h): Supported 00:15:45.247 Get Features (0Ah): Supported 00:15:45.247 Asynchronous Event Request (0Ch): Supported 00:15:45.247 Keep Alive (18h): Supported 00:15:45.247 I/O Commands 00:15:45.247 ------------ 00:15:45.247 Flush (00h): Supported LBA-Change 00:15:45.247 Write (01h): Supported LBA-Change 00:15:45.247 Read (02h): Supported 00:15:45.247 Compare (05h): Supported 00:15:45.247 Write Zeroes (08h): Supported LBA-Change 00:15:45.247 Dataset Management (09h): Supported LBA-Change 00:15:45.247 Copy (19h): Supported LBA-Change 00:15:45.247 Unknown (79h): Supported LBA-Change 00:15:45.247 Unknown (7Ah): Supported 00:15:45.247 00:15:45.247 Error Log 00:15:45.247 ========= 00:15:45.247 00:15:45.247 Arbitration 00:15:45.247 =========== 00:15:45.247 Arbitration Burst: 1 00:15:45.247 00:15:45.247 Power Management 00:15:45.247 ================ 00:15:45.247 Number of Power States: 1 00:15:45.247 
Current Power State: Power State #0 00:15:45.247 Power State #0: 00:15:45.247 Max Power: 0.00 W 00:15:45.247 Non-Operational State: Operational 00:15:45.247 Entry Latency: Not Reported 00:15:45.247 Exit Latency: Not Reported 00:15:45.247 Relative Read Throughput: 0 00:15:45.247 Relative Read Latency: 0 00:15:45.247 Relative Write Throughput: 0 00:15:45.247 Relative Write Latency: 0 00:15:45.247 Idle Power: Not Reported 00:15:45.247 Active Power: Not Reported 00:15:45.247 Non-Operational Permissive Mode: Not Supported 00:15:45.247 00:15:45.247 Health Information 00:15:45.247 ================== 00:15:45.247 Critical Warnings: 00:15:45.247 Available Spare Space: OK 00:15:45.247 Temperature: OK 00:15:45.247 Device Reliability: OK 00:15:45.247 Read Only: No 00:15:45.247 Volatile Memory Backup: OK 00:15:45.247 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:45.247 [2024-07-12 17:26:24.018434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:45.247 [2024-07-12 17:26:24.026263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:45.247 [2024-07-12 17:26:24.026298] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:45.247 [2024-07-12 17:26:24.026310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.247 [2024-07-12 17:26:24.026318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.247 [2024-07-12 17:26:24.026327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.247 [2024-07-12 17:26:24.026335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:15:45.247 [2024-07-12 17:26:24.026382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:45.247 [2024-07-12 17:26:24.026395] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:45.247 [2024-07-12 17:26:24.027424] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:45.247 [2024-07-12 17:26:24.027437] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:45.247 [2024-07-12 17:26:24.028392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:45.247 [2024-07-12 17:26:24.028408] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:45.247 [2024-07-12 17:26:24.028462] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:45.247 [2024-07-12 17:26:24.029921] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:45.247 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:45.247 Available Spare: 0% 00:15:45.247 Available Spare Threshold: 0% 00:15:45.247 Life Percentage Used: 0% 00:15:45.247 Data Units Read: 0 00:15:45.247 Data Units Written: 0 00:15:45.247 Host Read Commands: 0 00:15:45.247 Host Write Commands: 0 00:15:45.247 Controller Busy Time: 0 minutes 00:15:45.247 Power Cycles: 0 00:15:45.247 Power On Hours: 0 hours 00:15:45.247 Unsafe Shutdowns: 0 00:15:45.247 Unrecoverable Media Errors: 0 00:15:45.247 Lifetime Error Log Entries: 0 00:15:45.247 Warning Temperature Time: 0 minutes
00:15:45.247 Critical Temperature Time: 0 minutes 00:15:45.247 00:15:45.247 Number of Queues 00:15:45.247 ================ 00:15:45.247 Number of I/O Submission Queues: 127 00:15:45.247 Number of I/O Completion Queues: 127 00:15:45.247 00:15:45.247 Active Namespaces 00:15:45.247 ================= 00:15:45.247 Namespace ID:1 00:15:45.247 Error Recovery Timeout: Unlimited 00:15:45.247 Command Set Identifier: NVM (00h) 00:15:45.247 Deallocate: Supported 00:15:45.247 Deallocated/Unwritten Error: Not Supported 00:15:45.247 Deallocated Read Value: Unknown 00:15:45.247 Deallocate in Write Zeroes: Not Supported 00:15:45.248 Deallocated Guard Field: 0xFFFF 00:15:45.248 Flush: Supported 00:15:45.248 Reservation: Supported 00:15:45.248 Namespace Sharing Capabilities: Multiple Controllers 00:15:45.248 Size (in LBAs): 131072 (0GiB) 00:15:45.248 Capacity (in LBAs): 131072 (0GiB) 00:15:45.248 Utilization (in LBAs): 131072 (0GiB) 00:15:45.248 NGUID: E60DE16C88C547909311BB1D7B925C81 00:15:45.248 UUID: e60de16c-88c5-4790-9311-bb1d7b925c81 00:15:45.248 Thin Provisioning: Not Supported 00:15:45.248 Per-NS Atomic Units: Yes 00:15:45.248 Atomic Boundary Size (Normal): 0 00:15:45.248 Atomic Boundary Size (PFail): 0 00:15:45.248 Atomic Boundary Offset: 0 00:15:45.248 Maximum Single Source Range Length: 65535 00:15:45.248 Maximum Copy Length: 65535 00:15:45.248 Maximum Source Range Count: 1 00:15:45.248 NGUID/EUI64 Never Reused: No 00:15:45.248 Namespace Write Protected: No 00:15:45.248 Number of LBA Formats: 1 00:15:45.248 Current LBA Format: LBA Format #00 00:15:45.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:45.248 00:15:45.248 17:26:24 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:45.248 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.516 
Initializing NVMe Controllers 00:15:50.516 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:50.516 Initialization complete. Launching workers. 00:15:50.516 ======================================================== 00:15:50.516 Latency(us) 00:15:50.516 Device Information : IOPS MiB/s Average min max 00:15:50.516 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 42084.07 164.39 3040.70 948.72 9033.51 00:15:50.516 ======================================================== 00:15:50.516 Total : 42084.07 164.39 3040.70 948.72 9033.51 00:15:50.516 00:15:50.516 17:26:29 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:50.516 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.800 Initializing NVMe Controllers 00:15:55.800 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:55.800 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:55.800 Initialization complete. Launching workers. 
00:15:55.800 ======================================================== 00:15:55.800 Latency(us) 00:15:55.800 Device Information : IOPS MiB/s Average min max 00:15:55.800 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24775.39 96.78 5165.61 1415.98 8994.70 00:15:55.800 ======================================================== 00:15:55.800 Total : 24775.39 96.78 5165.61 1415.98 8994.70 00:15:55.800 00:15:55.800 17:26:34 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:56.113 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.422 Initializing NVMe Controllers 00:16:01.422 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.422 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:01.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:01.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:01.422 Initialization complete. Launching workers. 
00:16:01.422 Starting thread on core 2 00:16:01.422 Starting thread on core 3 00:16:01.422 Starting thread on core 1 00:16:01.422 17:26:40 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:01.422 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.712 Initializing NVMe Controllers 00:16:04.712 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.712 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.712 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:04.712 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:04.712 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:04.712 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:04.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:04.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:04.712 Initialization complete. Launching workers. 
00:16:04.712 Starting thread on core 1 with urgent priority queue 00:16:04.712 Starting thread on core 2 with urgent priority queue 00:16:04.712 Starting thread on core 3 with urgent priority queue 00:16:04.712 Starting thread on core 0 with urgent priority queue 00:16:04.712 SPDK bdev Controller (SPDK2 ) core 0: 7825.00 IO/s 12.78 secs/100000 ios 00:16:04.712 SPDK bdev Controller (SPDK2 ) core 1: 8322.00 IO/s 12.02 secs/100000 ios 00:16:04.712 SPDK bdev Controller (SPDK2 ) core 2: 7261.67 IO/s 13.77 secs/100000 ios 00:16:04.712 SPDK bdev Controller (SPDK2 ) core 3: 8904.67 IO/s 11.23 secs/100000 ios 00:16:04.712 ======================================================== 00:16:04.712 00:16:04.712 17:26:43 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:04.712 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.972 Initializing NVMe Controllers 00:16:04.972 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.972 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.972 Namespace ID: 1 size: 0GB 00:16:04.972 Initialization complete. 00:16:04.972 INFO: using host memory buffer for IO 00:16:04.972 Hello world! 00:16:04.972 17:26:43 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:04.972 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.350 Initializing NVMe Controllers 00:16:06.350 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.350 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.350 Initialization complete. Launching workers. 
00:16:06.350 submit (in ns) avg, min, max = 8949.2, 4484.5, 4002267.3 00:16:06.350 complete (in ns) avg, min, max = 28471.5, 2674.5, 3999790.9 00:16:06.350 00:16:06.350 Submit histogram 00:16:06.350 ================ 00:16:06.350 Range in us Cumulative Count 00:16:06.350 4.480 - 4.509: 0.4032% ( 50) 00:16:06.350 4.509 - 4.538: 1.7821% ( 171) 00:16:06.350 4.538 - 4.567: 3.7820% ( 248) 00:16:06.350 4.567 - 4.596: 6.9107% ( 388) 00:16:06.350 4.596 - 4.625: 16.4503% ( 1183) 00:16:06.350 4.625 - 4.655: 27.7881% ( 1406) 00:16:06.350 4.655 - 4.684: 39.8355% ( 1494) 00:16:06.350 4.684 - 4.713: 51.9232% ( 1499) 00:16:06.350 4.713 - 4.742: 62.0998% ( 1262) 00:16:06.350 4.742 - 4.771: 71.7039% ( 1191) 00:16:06.350 4.771 - 4.800: 78.5340% ( 847) 00:16:06.350 4.800 - 4.829: 83.0417% ( 559) 00:16:06.350 4.829 - 4.858: 85.9205% ( 357) 00:16:06.350 4.858 - 4.887: 87.7913% ( 232) 00:16:06.350 4.887 - 4.916: 89.4202% ( 202) 00:16:06.350 4.916 - 4.945: 91.3717% ( 242) 00:16:06.350 4.945 - 4.975: 93.3312% ( 243) 00:16:06.350 4.975 - 5.004: 95.2665% ( 240) 00:16:06.350 5.004 - 5.033: 96.5406% ( 158) 00:16:06.350 5.033 - 5.062: 97.7502% ( 150) 00:16:06.350 5.062 - 5.091: 98.5001% ( 93) 00:16:06.350 5.091 - 5.120: 98.9114% ( 51) 00:16:06.351 5.120 - 5.149: 99.2017% ( 36) 00:16:06.351 5.149 - 5.178: 99.3468% ( 18) 00:16:06.351 5.178 - 5.207: 99.4275% ( 10) 00:16:06.351 5.207 - 5.236: 99.4839% ( 7) 00:16:06.351 5.236 - 5.265: 99.5081% ( 3) 00:16:06.351 5.265 - 5.295: 99.5162% ( 1) 00:16:06.351 5.353 - 5.382: 99.5404% ( 3) 00:16:06.351 5.411 - 5.440: 99.5484% ( 1) 00:16:06.351 6.720 - 6.749: 99.5565% ( 1) 00:16:06.351 6.749 - 6.778: 99.5646% ( 1) 00:16:06.351 6.836 - 6.865: 99.5726% ( 1) 00:16:06.351 6.924 - 6.953: 99.5807% ( 1) 00:16:06.351 7.040 - 7.069: 99.5887% ( 1) 00:16:06.351 7.156 - 7.185: 99.5968% ( 1) 00:16:06.351 7.215 - 7.244: 99.6049% ( 1) 00:16:06.351 7.273 - 7.302: 99.6210% ( 2) 00:16:06.351 7.302 - 7.331: 99.6291% ( 1) 00:16:06.351 7.331 - 7.360: 99.6371% ( 1) 00:16:06.351 
7.680 - 7.738: 99.6613% ( 3) 00:16:06.351 7.738 - 7.796: 99.6774% ( 2) 00:16:06.351 7.855 - 7.913: 99.6855% ( 1) 00:16:06.351 7.971 - 8.029: 99.6936% ( 1) 00:16:06.351 8.087 - 8.145: 99.7016% ( 1) 00:16:06.351 8.262 - 8.320: 99.7097% ( 1) 00:16:06.351 8.320 - 8.378: 99.7258% ( 2) 00:16:06.351 8.378 - 8.436: 99.7339% ( 1) 00:16:06.351 8.495 - 8.553: 99.7420% ( 1) 00:16:06.351 8.553 - 8.611: 99.7500% ( 1) 00:16:06.351 8.611 - 8.669: 99.7661% ( 2) 00:16:06.351 8.669 - 8.727: 99.7742% ( 1) 00:16:06.351 8.727 - 8.785: 99.7823% ( 1) 00:16:06.351 8.785 - 8.844: 99.7903% ( 1) 00:16:06.351 8.844 - 8.902: 99.8145% ( 3) 00:16:06.351 8.960 - 9.018: 99.8226% ( 1) 00:16:06.351 9.018 - 9.076: 99.8307% ( 1) 00:16:06.351 9.076 - 9.135: 99.8387% ( 1) 00:16:06.351 9.135 - 9.193: 99.8468% ( 1) 00:16:06.351 9.542 - 9.600: 99.8549% ( 1) 00:16:06.351 9.716 - 9.775: 99.8629% ( 1) 00:16:06.351 9.775 - 9.833: 99.8710% ( 1) 00:16:06.351 10.124 - 10.182: 99.8871% ( 2) 00:16:06.351 216.902 - 217.833: 99.8952% ( 1) 00:16:06.351 3991.738 - 4021.527: 100.0000% ( 13) 00:16:06.351 00:16:06.351 Complete histogram 00:16:06.351 ================== 00:16:06.351 Range in us Cumulative Count 00:16:06.351 2.662 - 2.676: 0.0081% ( 1) 00:16:06.351 2.676 - 2.691: 1.1612% ( 143) 00:16:06.351 2.691 - 2.705: 17.4099% ( 2015) 00:16:06.351 2.705 - 2.720: 57.3905% ( 4958) 00:16:06.351 2.720 - 2.735: 80.6628% ( 2886) 00:16:06.351 2.735 - 2.749: 86.8236% ( 764) 00:16:06.351 2.749 - 2.764: 92.2668% ( 675) 00:16:06.351 2.764 - 2.778: 96.3390% ( 505) 00:16:06.351 2.778 - 2.793: 97.7341% ( 173) 00:16:06.351 2.793 - 2.807: 98.3388% ( 75) 00:16:06.351 2.807 - 2.822: 98.7420% ( 50) 00:16:06.351 2.822 - 2.836: 98.8953% ( 19) 00:16:06.351 2.836 - 2.851: 98.9517% ( 7) 00:16:06.351 2.851 - 2.865: 98.9840% ( 4) 00:16:06.351 2.865 - 2.880: 99.0243% ( 5) 00:16:06.351 2.880 - 2.895: 99.0485% ( 3) 00:16:06.351 2.895 - 2.909: 99.1049% ( 7) 00:16:06.351 2.924 - 2.938: 99.1130% ( 1) 00:16:06.351 2.938 - 2.953: 99.1372% ( 3) 
00:16:06.351 3.011 - 3.025: 99.1452% ( 1) 00:16:06.351 4.916 - 4.945: 99.1533% ( 1) 00:16:06.351 5.004 - 5.033: 99.1614% ( 1) 00:16:06.351 5.033 - 5.062: 99.1694% ( 1) 00:16:06.351 5.178 - 5.207: 99.1775% ( 1) 00:16:06.351 5.207 - 5.236: 99.1855% ( 1) 00:16:06.351 5.236 - 5.265: 99.1936% ( 1) 00:16:06.351 5.295 - 5.324: 99.2017% ( 1) 00:16:06.351 5.324 - 5.353: 99.2097% ( 1) 00:16:06.351 5.411 - 5.440: 99.2178% ( 1) 00:16:06.351 5.498 - 5.527: 99.2259% ( 1) 00:16:06.351 5.585 - 5.615: 99.2339% ( 1) 00:16:06.351 5.644 - 5.673: 99.2420% ( 1) 00:16:06.351 5.731 - 5.760: 99.2501% ( 1) 00:16:06.351 6.196 - 6.225: 99.2581% ( 1) 00:16:06.351 6.429 - 6.458: 99.2743% ( 2) 00:16:06.351 6.458 - 6.487: 99.2823% ( 1) 00:16:06.351 7.011 - 7.040: 99.2904% ( 1) 00:16:06.351 7.098 - 7.127: 99.2984% ( 1) 00:16:06.351 7.156 - 7.185: 99.3065% ( 1) 00:16:06.351 8.204 - 8.262: 99.3146% ( 1) 00:16:06.351 8.436 - 8.495: 99.3226% ( 1) 00:16:06.351 8.495 - 8.553: 99.3307% ( 1) 00:16:06.351 9.367 - 9.425: 99.3388% ( 1) 00:16:06.351 10.415 - 10.473: 99.3468% ( 1) 00:16:06.351 16.175 - 16.291: 99.3549% ( 1) 00:16:06.351 3589.585 - 3604.480: 99.3630% ( 1) 00:16:06.351 3991.738 - 4021.527: 100.0000% ( 79) 00:16:06.351 00:16:06.351 17:26:45 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:06.351 17:26:45 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:06.351 17:26:45 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:06.351 17:26:45 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:06.351 17:26:45 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.610 [ 00:16:06.610 { 00:16:06.610 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.610 "subtype": "Discovery", 00:16:06.610 "listen_addresses": [], 00:16:06.610 "allow_any_host": true, 00:16:06.610 
"hosts": [] 00:16:06.610 }, 00:16:06.610 { 00:16:06.610 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.610 "subtype": "NVMe", 00:16:06.610 "listen_addresses": [ 00:16:06.610 { 00:16:06.610 "transport": "VFIOUSER", 00:16:06.610 "trtype": "VFIOUSER", 00:16:06.610 "adrfam": "IPv4", 00:16:06.610 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.610 "trsvcid": "0" 00:16:06.610 } 00:16:06.610 ], 00:16:06.610 "allow_any_host": true, 00:16:06.610 "hosts": [], 00:16:06.610 "serial_number": "SPDK1", 00:16:06.610 "model_number": "SPDK bdev Controller", 00:16:06.610 "max_namespaces": 32, 00:16:06.610 "min_cntlid": 1, 00:16:06.610 "max_cntlid": 65519, 00:16:06.610 "namespaces": [ 00:16:06.610 { 00:16:06.610 "nsid": 1, 00:16:06.610 "bdev_name": "Malloc1", 00:16:06.610 "name": "Malloc1", 00:16:06.610 "nguid": "AEF9FAAE202C41189FD95F13D12936C0", 00:16:06.610 "uuid": "aef9faae-202c-4118-9fd9-5f13d12936c0" 00:16:06.610 }, 00:16:06.610 { 00:16:06.610 "nsid": 2, 00:16:06.610 "bdev_name": "Malloc3", 00:16:06.610 "name": "Malloc3", 00:16:06.610 "nguid": "CB91E30F1FC2444589394079A3E8BD99", 00:16:06.610 "uuid": "cb91e30f-1fc2-4445-8939-4079a3e8bd99" 00:16:06.610 } 00:16:06.610 ] 00:16:06.610 }, 00:16:06.610 { 00:16:06.610 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.610 "subtype": "NVMe", 00:16:06.610 "listen_addresses": [ 00:16:06.610 { 00:16:06.610 "transport": "VFIOUSER", 00:16:06.610 "trtype": "VFIOUSER", 00:16:06.610 "adrfam": "IPv4", 00:16:06.610 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.610 "trsvcid": "0" 00:16:06.610 } 00:16:06.610 ], 00:16:06.610 "allow_any_host": true, 00:16:06.610 "hosts": [], 00:16:06.610 "serial_number": "SPDK2", 00:16:06.611 "model_number": "SPDK bdev Controller", 00:16:06.611 "max_namespaces": 32, 00:16:06.611 "min_cntlid": 1, 00:16:06.611 "max_cntlid": 65519, 00:16:06.611 "namespaces": [ 00:16:06.611 { 00:16:06.611 "nsid": 1, 00:16:06.611 "bdev_name": "Malloc2", 00:16:06.611 "name": "Malloc2", 00:16:06.611 "nguid": 
"E60DE16C88C547909311BB1D7B925C81", 00:16:06.611 "uuid": "e60de16c-88c5-4790-9311-bb1d7b925c81" 00:16:06.611 } 00:16:06.611 ] 00:16:06.611 } 00:16:06.611 ] 00:16:06.611 17:26:45 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:06.611 17:26:45 -- target/nvmf_vfio_user.sh@34 -- # aerpid=4073576 00:16:06.611 17:26:45 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:06.611 17:26:45 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:06.611 17:26:45 -- common/autotest_common.sh@1244 -- # local i=0 00:16:06.611 17:26:45 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:06.611 17:26:45 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:06.611 17:26:45 -- common/autotest_common.sh@1255 -- # return 0 00:16:06.611 17:26:45 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:06.611 17:26:45 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:06.611 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.870 Malloc4 00:16:06.870 17:26:45 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:07.128 17:26:46 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:07.128 Asynchronous Event Request test 00:16:07.128 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.128 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:07.128 Registering asynchronous event callbacks... 00:16:07.128 Starting namespace attribute notice tests for all controllers... 
00:16:07.128 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:07.128 aer_cb - Changed Namespace 00:16:07.128 Cleaning up... 00:16:07.387 [ 00:16:07.387 { 00:16:07.387 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:07.387 "subtype": "Discovery", 00:16:07.387 "listen_addresses": [], 00:16:07.387 "allow_any_host": true, 00:16:07.387 "hosts": [] 00:16:07.387 }, 00:16:07.387 { 00:16:07.387 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:07.387 "subtype": "NVMe", 00:16:07.387 "listen_addresses": [ 00:16:07.387 { 00:16:07.387 "transport": "VFIOUSER", 00:16:07.387 "trtype": "VFIOUSER", 00:16:07.387 "adrfam": "IPv4", 00:16:07.387 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:07.387 "trsvcid": "0" 00:16:07.387 } 00:16:07.387 ], 00:16:07.387 "allow_any_host": true, 00:16:07.387 "hosts": [], 00:16:07.387 "serial_number": "SPDK1", 00:16:07.387 "model_number": "SPDK bdev Controller", 00:16:07.387 "max_namespaces": 32, 00:16:07.387 "min_cntlid": 1, 00:16:07.387 "max_cntlid": 65519, 00:16:07.387 "namespaces": [ 00:16:07.387 { 00:16:07.387 "nsid": 1, 00:16:07.387 "bdev_name": "Malloc1", 00:16:07.387 "name": "Malloc1", 00:16:07.387 "nguid": "AEF9FAAE202C41189FD95F13D12936C0", 00:16:07.387 "uuid": "aef9faae-202c-4118-9fd9-5f13d12936c0" 00:16:07.387 }, 00:16:07.387 { 00:16:07.387 "nsid": 2, 00:16:07.387 "bdev_name": "Malloc3", 00:16:07.387 "name": "Malloc3", 00:16:07.387 "nguid": "CB91E30F1FC2444589394079A3E8BD99", 00:16:07.387 "uuid": "cb91e30f-1fc2-4445-8939-4079a3e8bd99" 00:16:07.387 } 00:16:07.387 ] 00:16:07.387 }, 00:16:07.387 { 00:16:07.387 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:07.387 "subtype": "NVMe", 00:16:07.387 "listen_addresses": [ 00:16:07.387 { 00:16:07.387 "transport": "VFIOUSER", 00:16:07.387 "trtype": "VFIOUSER", 00:16:07.387 "adrfam": "IPv4", 00:16:07.387 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:07.387 "trsvcid": "0" 00:16:07.387 } 00:16:07.387 ], 00:16:07.387 
"allow_any_host": true, 00:16:07.387 "hosts": [], 00:16:07.387 "serial_number": "SPDK2", 00:16:07.387 "model_number": "SPDK bdev Controller", 00:16:07.387 "max_namespaces": 32, 00:16:07.387 "min_cntlid": 1, 00:16:07.387 "max_cntlid": 65519, 00:16:07.387 "namespaces": [ 00:16:07.387 { 00:16:07.387 "nsid": 1, 00:16:07.387 "bdev_name": "Malloc2", 00:16:07.387 "name": "Malloc2", 00:16:07.387 "nguid": "E60DE16C88C547909311BB1D7B925C81", 00:16:07.387 "uuid": "e60de16c-88c5-4790-9311-bb1d7b925c81" 00:16:07.387 }, 00:16:07.387 { 00:16:07.387 "nsid": 2, 00:16:07.387 "bdev_name": "Malloc4", 00:16:07.387 "name": "Malloc4", 00:16:07.387 "nguid": "C380BB7EC85D4A2C85828A728AC189F6", 00:16:07.387 "uuid": "c380bb7e-c85d-4a2c-8582-8a728ac189f6" 00:16:07.387 } 00:16:07.387 ] 00:16:07.387 } 00:16:07.387 ] 00:16:07.387 17:26:46 -- target/nvmf_vfio_user.sh@44 -- # wait 4073576 00:16:07.387 17:26:46 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:07.387 17:26:46 -- target/nvmf_vfio_user.sh@95 -- # killprocess 4064768 00:16:07.387 17:26:46 -- common/autotest_common.sh@926 -- # '[' -z 4064768 ']' 00:16:07.387 17:26:46 -- common/autotest_common.sh@930 -- # kill -0 4064768 00:16:07.387 17:26:46 -- common/autotest_common.sh@931 -- # uname 00:16:07.387 17:26:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:07.387 17:26:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4064768 00:16:07.387 17:26:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:07.387 17:26:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:07.387 17:26:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4064768' 00:16:07.387 killing process with pid 4064768 00:16:07.388 17:26:46 -- common/autotest_common.sh@945 -- # kill 4064768 00:16:07.388 [2024-07-12 17:26:46.332329] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for 
removal in v24.05 hit 1 times 00:16:07.388 17:26:46 -- common/autotest_common.sh@950 -- # wait 4064768 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4073817 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4073817' 00:16:07.647 Process pid: 4073817 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:07.647 17:26:46 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4073817 00:16:07.647 17:26:46 -- common/autotest_common.sh@819 -- # '[' -z 4073817 ']' 00:16:07.647 17:26:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.647 17:26:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:07.647 17:26:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.647 17:26:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:07.647 17:26:46 -- common/autotest_common.sh@10 -- # set +x 00:16:07.906 [2024-07-12 17:26:46.643357] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:16:07.906 [2024-07-12 17:26:46.644192] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:07.906 [2024-07-12 17:26:46.644230] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.906 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.906 [2024-07-12 17:26:46.715822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.906 [2024-07-12 17:26:46.756450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.906 [2024-07-12 17:26:46.756608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.906 [2024-07-12 17:26:46.756620] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.906 [2024-07-12 17:26:46.756629] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.906 [2024-07-12 17:26:46.756682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.906 [2024-07-12 17:26:46.756784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.906 [2024-07-12 17:26:46.756860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.906 [2024-07-12 17:26:46.756862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.906 [2024-07-12 17:26:46.835689] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:07.906 [2024-07-12 17:26:46.836040] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 
00:16:07.906 [2024-07-12 17:26:46.836372] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:07.906 [2024-07-12 17:26:46.837217] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:07.906 [2024-07-12 17:26:46.837415] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:16:08.843 17:26:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:08.843 17:26:47 -- common/autotest_common.sh@852 -- # return 0 00:16:08.843 17:26:47 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:09.779 17:26:48 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:09.779 17:26:48 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:10.038 17:26:48 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:10.038 17:26:48 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.038 17:26:48 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:10.038 17:26:48 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:10.038 Malloc1 00:16:10.296 17:26:49 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:10.296 17:26:49 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:10.554 17:26:49 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 
00:16:10.813 17:26:49 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.813 17:26:49 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:10.813 17:26:49 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:11.073 Malloc2 00:16:11.073 17:26:50 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:11.332 17:26:50 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:11.591 17:26:50 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:11.850 17:26:50 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:11.851 17:26:50 -- target/nvmf_vfio_user.sh@95 -- # killprocess 4073817 00:16:11.851 17:26:50 -- common/autotest_common.sh@926 -- # '[' -z 4073817 ']' 00:16:11.851 17:26:50 -- common/autotest_common.sh@930 -- # kill -0 4073817 00:16:11.851 17:26:50 -- common/autotest_common.sh@931 -- # uname 00:16:11.851 17:26:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:11.851 17:26:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4073817 00:16:11.851 17:26:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:11.851 17:26:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:11.851 17:26:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4073817' 00:16:11.851 killing process with pid 4073817 00:16:11.851 17:26:50 -- common/autotest_common.sh@945 -- # kill 4073817 00:16:11.851 17:26:50 -- common/autotest_common.sh@950 -- # wait 4073817 
00:16:12.110 17:26:51 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:12.110 17:26:51 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:12.110 00:16:12.110 real 0m53.473s 00:16:12.110 user 3m31.911s 00:16:12.110 sys 0m4.069s 00:16:12.110 17:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.110 17:26:51 -- common/autotest_common.sh@10 -- # set +x 00:16:12.110 ************************************ 00:16:12.110 END TEST nvmf_vfio_user 00:16:12.110 ************************************ 00:16:12.110 17:26:51 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:12.110 17:26:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:12.110 17:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.110 17:26:51 -- common/autotest_common.sh@10 -- # set +x 00:16:12.110 ************************************ 00:16:12.110 START TEST nvmf_vfio_user_nvme_compliance 00:16:12.110 ************************************ 00:16:12.110 17:26:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:12.370 * Looking for test storage... 
00:16:12.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance
00:16:12.370 17:26:51 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:12.370 17:26:51 -- nvmf/common.sh@7 -- # uname -s
00:16:12.370 17:26:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:12.370 17:26:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:12.370 17:26:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:12.370 17:26:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:12.370 17:26:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:12.370 17:26:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:12.370 17:26:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:12.370 17:26:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:12.370 17:26:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:12.370 17:26:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:12.370 17:26:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:16:12.370 17:26:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:16:12.370 17:26:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:12.370 17:26:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:12.370 17:26:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:12.370 17:26:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:12.370 17:26:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:12.370 17:26:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:12.370 17:26:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:12.370 17:26:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:12.370 17:26:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:12.370 17:26:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:12.370 17:26:51 -- paths/export.sh@5 -- # export PATH
00:16:12.370 17:26:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:12.370 17:26:51 -- nvmf/common.sh@46 -- # : 0
00:16:12.370 17:26:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:12.370 17:26:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:12.370 17:26:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:12.370 17:26:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:12.370 17:26:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:12.370 17:26:51 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:12.370 17:26:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:12.370 17:26:51 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:12.370 17:26:51 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:12.370 17:26:51 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:12.370 17:26:51 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:16:12.370 17:26:51 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:16:12.370 17:26:51 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:16:12.370 17:26:51 -- compliance/compliance.sh@20 -- # nvmfpid=4074720
00:16:12.370 17:26:51 -- compliance/compliance.sh@21 -- # echo 'Process pid: 4074720'
Process pid: 4074720
00:16:12.370 17:26:51 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:16:12.370 17:26:51 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:16:12.370 17:26:51 -- compliance/compliance.sh@24 -- # waitforlisten 4074720
00:16:12.370 17:26:51 -- common/autotest_common.sh@819 -- # '[' -z 4074720 ']'
00:16:12.370 17:26:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:12.370 17:26:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:16:12.370 17:26:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:12.370 17:26:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:16:12.370 17:26:51 -- common/autotest_common.sh@10 -- # set +x
00:16:12.370 [2024-07-12 17:26:51.220045] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:16:12.370 [2024-07-12 17:26:51.220109] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:12.370 EAL: No free 2048 kB hugepages reported on node 1
00:16:12.370 [2024-07-12 17:26:51.302517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:12.629 [2024-07-12 17:26:51.344325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:12.629 [2024-07-12 17:26:51.344478] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:12.629 [2024-07-12 17:26:51.344489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:12.629 [2024-07-12 17:26:51.344499] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:12.629 [2024-07-12 17:26:51.344539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:12.629 [2024-07-12 17:26:51.344642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:12.629 [2024-07-12 17:26:51.344646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:13.198 17:26:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:16:13.198 17:26:52 -- common/autotest_common.sh@852 -- # return 0
00:16:13.198 17:26:52 -- compliance/compliance.sh@26 -- # sleep 1
00:16:14.576 17:26:53 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:16:14.576 17:26:53 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:16:14.576 17:26:53 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:16:14.576 17:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:14.576 17:26:53 -- common/autotest_common.sh@10 -- # set +x
00:16:14.576 17:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:14.576 17:26:53 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:16:14.576 17:26:53 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:16:14.576 17:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:14.576 17:26:53 -- common/autotest_common.sh@10 -- # set +x
00:16:14.576 malloc0
00:16:14.576 17:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:14.576 17:26:53 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:16:14.576 17:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:14.576 17:26:53 -- common/autotest_common.sh@10 -- # set +x
00:16:14.576 17:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:14.576 17:26:53 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:16:14.576 17:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:14.576 17:26:53 -- common/autotest_common.sh@10 -- # set +x
00:16:14.576 17:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:14.576 17:26:53 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:16:14.576 17:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:14.576 17:26:53 -- common/autotest_common.sh@10 -- # set +x
00:16:14.576 17:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:14.576 17:26:53 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:16:14.576 EAL: No free 2048 kB hugepages reported on node 1
00:16:14.576
00:16:14.576
00:16:14.576 CUnit - A unit testing framework for C - Version 2.1-3
00:16:14.576 http://cunit.sourceforge.net/
00:16:14.576
00:16:14.576
00:16:14.576 Suite: nvme_compliance
00:16:14.576 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 17:26:53.413319] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
[2024-07-12 17:26:53.413373] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
[2024-07-12 17:26:53.413383] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
passed
00:16:14.834 Test: admin_identify_ctrlr_verify_fused ...passed
00:16:14.834 Test: admin_identify_ns ...[2024-07-12 17:26:53.681274] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
[2024-07-12 17:26:53.689268] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
passed
00:16:15.092 Test: admin_get_features_mandatory_features ...passed
00:16:15.092 Test: admin_get_features_optional_features ...passed
00:16:15.350 Test: admin_set_features_number_of_queues ...passed
00:16:15.350 Test: admin_get_log_page_mandatory_logs ...passed
00:16:15.610 Test: admin_get_log_page_with_lpo ...[2024-07-12 17:26:54.392273] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
passed
00:16:15.610 Test: fabric_property_get ...passed
00:16:15.869 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 17:26:54.606277] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
passed
00:16:15.869 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 17:26:54.791265] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
[2024-07-12 17:26:54.807275] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
passed
00:16:16.127 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 17:26:54.909429] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
passed
00:16:16.127 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 17:26:55.086266] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
[2024-07-12 17:26:55.110276] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
passed
00:16:16.386 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 17:26:55.213468] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
[2024-07-12 17:26:55.213503] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
passed
00:16:16.644 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 17:26:55.407268] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
[2024-07-12 17:26:55.415267] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
[2024-07-12 17:26:55.423282] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
[2024-07-12 17:26:55.431276] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
passed
00:16:16.644 Test: admin_create_io_sq_verify_pc ...[2024-07-12 17:26:55.571274] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:16:16.903 passed
00:16:17.840 Test: admin_create_io_qp_max_qps ...[2024-07-12 17:26:56.785268] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:16:18.408 passed
00:16:18.667 Test: admin_create_io_sq_shared_cq ...[2024-07-12 17:26:57.392267] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
passed
00:16:18.667
00:16:18.667 Run Summary: Type Total Ran Passed Failed Inactive
00:16:18.667 suites 1 1 n/a 0 0
00:16:18.667 tests 18 18 18 0 0
00:16:18.667 asserts 360 360 360 0 n/a
00:16:18.667
00:16:18.667 Elapsed time = 1.695 seconds
00:16:18.667 17:26:57 -- compliance/compliance.sh@42 -- # killprocess 4074720
00:16:18.667 17:26:57 -- common/autotest_common.sh@926 -- # '[' -z 4074720 ']'
00:16:18.667 17:26:57 -- common/autotest_common.sh@930 -- # kill -0 4074720
00:16:18.667 17:26:57 -- common/autotest_common.sh@931 -- # uname
00:16:18.667 17:26:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:16:18.667 17:26:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4074720
00:16:18.667 17:26:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:16:18.667 17:26:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:16:18.667 17:26:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4074720'
killing process with pid 4074720
00:16:18.667 17:26:57 -- common/autotest_common.sh@945 -- # kill 4074720
17:26:57 -- common/autotest_common.sh@950 -- # wait 4074720
00:16:18.938 17:26:57 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:16:18.938 17:26:57 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:16:18.938
00:16:18.938 real 0m6.686s
00:16:18.938 user 0m19.276s
00:16:18.938 sys 0m0.508s
00:16:18.938 17:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:18.938 17:26:57 -- common/autotest_common.sh@10 -- # set +x
00:16:18.938 ************************************
00:16:18.938 END TEST nvmf_vfio_user_nvme_compliance
00:16:18.938 ************************************
00:16:18.938 17:26:57 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:18.938 17:26:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:16:18.938 17:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:16:18.938 17:26:57 -- common/autotest_common.sh@10 -- # set +x
00:16:18.938 ************************************
00:16:18.938 START TEST nvmf_vfio_user_fuzz
00:16:18.938 ************************************
00:16:18.938 17:26:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:18.938 * Looking for test storage...
00:16:18.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:18.938 17:26:57 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:18.938 17:26:57 -- nvmf/common.sh@7 -- # uname -s
00:16:18.938 17:26:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:18.938 17:26:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:18.938 17:26:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:18.938 17:26:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:18.938 17:26:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:18.938 17:26:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:18.939 17:26:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:18.939 17:26:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:18.939 17:26:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:18.939 17:26:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:18.939 17:26:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:16:18.939 17:26:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:16:18.939 17:26:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:18.939 17:26:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:18.939 17:26:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:18.939 17:26:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:18.939 17:26:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:18.939 17:26:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:18.939 17:26:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:18.944 17:26:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:18.945 17:26:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:18.945 17:26:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:18.945 17:26:57 -- paths/export.sh@5 -- # export PATH
00:16:18.945 17:26:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:18.945 17:26:57 -- nvmf/common.sh@46 -- # : 0
00:16:18.945 17:26:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:18.945 17:26:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:18.945 17:26:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:18.945 17:26:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:18.945 17:26:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:18.945 17:26:57 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:18.945 17:26:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:18.945 17:26:57 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:18.945 17:26:57 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:16:18.945 17:26:57 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:16:18.945 17:26:57 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:16:18.945 17:26:57 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:16:18.945 17:26:57 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:16:18.946 17:26:57 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:16:18.946 17:26:57 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:16:18.946 17:26:57 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4075857
00:16:18.946 17:26:57 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4075857'
Process pid: 4075857
00:16:18.946 17:26:57 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:16:18.946 17:26:57 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:16:18.946 17:26:57 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4075857
00:16:18.946 17:26:57 -- common/autotest_common.sh@819 -- # '[' -z 4075857 ']'
00:16:18.946 17:26:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:18.946 17:26:57 -- common/autotest_common.sh@824 -- # local max_retries=100
00:16:18.946 17:26:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:18.946 17:26:57 -- common/autotest_common.sh@828 -- # xtrace_disable
00:16:18.946 17:26:57 -- common/autotest_common.sh@10 -- # set +x
00:16:20.329 17:26:58 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:16:20.329 17:26:58 -- common/autotest_common.sh@852 -- # return 0
00:16:20.329 17:26:58 -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:16:21.265 17:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:21.265 17:26:59 -- common/autotest_common.sh@10 -- # set +x
00:16:21.265 17:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:16:21.265 17:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:21.265 17:26:59 -- common/autotest_common.sh@10 -- # set +x
00:16:21.265 malloc0
00:16:21.265 17:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:16:21.265 17:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:21.265 17:26:59 -- common/autotest_common.sh@10 -- # set +x
00:16:21.265 17:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:16:21.265 17:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:21.265 17:26:59 -- common/autotest_common.sh@10 -- # set +x
00:16:21.265 17:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:16:21.265 17:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:21.265 17:26:59 -- common/autotest_common.sh@10 -- # set +x
00:16:21.265 17:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:16:21.265 17:26:59 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:16:53.353 Fuzzing completed. Shutting down the fuzz application
00:16:53.353
00:16:53.353 Dumping successful admin opcodes:
00:16:53.353 8, 9, 10, 24,
00:16:53.353 Dumping successful io opcodes:
00:16:53.353 0,
00:16:53.353 NS: 0x200003a1ef00 I/O qp, Total commands completed: 719382, total successful commands: 2799, random_seed: 3363711936
00:16:53.353 NS: 0x200003a1ef00 admin qp, Total commands completed: 177198, total successful commands: 1431, random_seed: 3593343616
00:16:53.353 17:27:30 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:16:53.353 17:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:53.353 17:27:30 -- common/autotest_common.sh@10 -- # set +x
00:16:53.353 17:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:53.353 17:27:30 -- target/vfio_user_fuzz.sh@46 -- # killprocess 4075857
00:16:53.353 17:27:30 -- common/autotest_common.sh@926 -- # '[' -z 4075857 ']'
00:16:53.353 17:27:30 -- common/autotest_common.sh@930 -- # kill -0 4075857
00:16:53.353 17:27:30 -- common/autotest_common.sh@931 -- # uname
00:16:53.353 17:27:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:16:53.353 17:27:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4075857
00:16:53.353 17:27:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:16:53.353 17:27:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:16:53.353 17:27:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4075857'
killing process with pid 4075857
00:16:53.353 17:27:30 -- common/autotest_common.sh@945 -- # kill 4075857
00:16:53.353 17:27:30 -- common/autotest_common.sh@950 -- # wait 4075857
00:16:53.353 17:27:30 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:16:53.353 17:27:30 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:16:53.353
00:16:53.353 real 0m32.901s
00:16:53.353 user 0m36.439s
00:16:53.353 sys 0m23.988s
00:16:53.353 17:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:53.353 17:27:30 -- common/autotest_common.sh@10 -- # set +x
00:16:53.353 ************************************
00:16:53.353 END TEST nvmf_vfio_user_fuzz
00:16:53.353 ************************************
00:16:53.353 17:27:30 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:16:53.353 17:27:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:16:53.353 17:27:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:16:53.353 17:27:30 -- common/autotest_common.sh@10 -- # set +x
00:16:53.353 ************************************
00:16:53.353 START TEST nvmf_host_management
00:16:53.353 ************************************
00:16:53.353 17:27:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:16:53.353 * Looking for test storage...
00:16:53.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:53.353 17:27:30 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:53.353 17:27:30 -- nvmf/common.sh@7 -- # uname -s
00:16:53.353 17:27:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:53.353 17:27:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:53.353 17:27:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:53.353 17:27:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:53.353 17:27:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:53.353 17:27:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:53.353 17:27:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:53.353 17:27:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:53.353 17:27:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:53.353 17:27:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:53.353 17:27:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:16:53.353 17:27:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:16:53.353 17:27:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:53.353 17:27:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:53.353 17:27:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:53.353 17:27:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:53.353 17:27:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:53.353 17:27:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:53.353 17:27:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:53.353 17:27:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:53.353 17:27:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:53.353 17:27:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:53.353 17:27:30 -- paths/export.sh@5 -- # export PATH
00:16:53.353 17:27:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:53.353 17:27:30 -- nvmf/common.sh@46 -- # : 0
00:16:53.353 17:27:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:53.353 17:27:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:53.353 17:27:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:53.353 17:27:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:53.353 17:27:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:53.353 17:27:30 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:53.353 17:27:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:53.353 17:27:30 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:53.353 17:27:30 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:53.353 17:27:30 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:53.353 17:27:30 -- target/host_management.sh@104 -- # nvmftestinit
00:16:53.353 17:27:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:16:53.353 17:27:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:53.353 17:27:30 -- nvmf/common.sh@436 -- # prepare_net_devs
00:16:53.353 17:27:30 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:53.353 17:27:30 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:53.353 17:27:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:53.353 17:27:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:53.353 17:27:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:53.353 17:27:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:16:53.353 17:27:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:16:53.353 17:27:30 -- nvmf/common.sh@284 -- # xtrace_disable
00:16:53.353 17:27:30 -- common/autotest_common.sh@10 -- # set +x
00:16:57.544 17:27:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:16:57.544 17:27:36 -- nvmf/common.sh@290 -- # pci_devs=()
00:16:57.544 17:27:36 -- nvmf/common.sh@290 -- # local -a pci_devs
00:16:57.544 17:27:36 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:16:57.544 17:27:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:16:57.544 17:27:36 -- nvmf/common.sh@292 -- # pci_drivers=()
00:16:57.544 17:27:36 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:16:57.544 17:27:36 -- nvmf/common.sh@294 -- # net_devs=()
00:16:57.544 17:27:36 -- nvmf/common.sh@294 -- # local -ga net_devs
00:16:57.544 17:27:36 -- nvmf/common.sh@295 -- # e810=()
00:16:57.544 17:27:36 -- nvmf/common.sh@295 -- # local -ga e810
00:16:57.544 17:27:36 -- nvmf/common.sh@296 -- # x722=()
00:16:57.544 17:27:36 -- nvmf/common.sh@296 -- # local -ga x722
00:16:57.544 17:27:36 -- nvmf/common.sh@297 -- # mlx=()
00:16:57.544 17:27:36 -- nvmf/common.sh@297 -- # local -ga mlx
00:16:57.544 17:27:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:57.544 17:27:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:57.544 17:27:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:57.544 17:27:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:57.544 17:27:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:57.545 17:27:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:57.545 17:27:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:57.545 17:27:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:57.545 17:27:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.545 17:27:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.545 17:27:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.545 17:27:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:57.545 17:27:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:57.545 17:27:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:57.545 17:27:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:57.545 17:27:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:57.545 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:57.545 17:27:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:57.545 17:27:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:57.545 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:57.545 17:27:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:57.545 17:27:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:57.545 
17:27:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:57.545 17:27:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.545 17:27:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:57.545 17:27:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.545 17:27:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:57.545 Found net devices under 0000:af:00.0: cvl_0_0 00:16:57.545 17:27:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.545 17:27:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:57.545 17:27:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.545 17:27:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:57.545 17:27:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.545 17:27:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:57.545 Found net devices under 0000:af:00.1: cvl_0_1 00:16:57.545 17:27:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.545 17:27:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:57.545 17:27:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:57.545 17:27:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:57.545 17:27:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.545 17:27:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.545 17:27:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.545 17:27:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:57.545 17:27:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.545 17:27:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.545 17:27:36 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:57.545 17:27:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.545 17:27:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.545 17:27:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:57.545 17:27:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:57.545 17:27:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.545 17:27:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.545 17:27:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.545 17:27:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.545 17:27:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:57.545 17:27:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.545 17:27:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.545 17:27:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.545 17:27:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:57.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:16:57.545 00:16:57.545 --- 10.0.0.2 ping statistics --- 00:16:57.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.545 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:57.545 17:27:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:16:57.545 00:16:57.545 --- 10.0.0.1 ping statistics --- 00:16:57.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.545 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:57.545 17:27:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.545 17:27:36 -- nvmf/common.sh@410 -- # return 0 00:16:57.545 17:27:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:57.545 17:27:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.545 17:27:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:57.545 17:27:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.545 17:27:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:57.545 17:27:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:57.545 17:27:36 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:57.545 17:27:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:57.545 17:27:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:57.545 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.545 ************************************ 00:16:57.545 START TEST nvmf_host_management 00:16:57.545 ************************************ 00:16:57.545 17:27:36 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:57.545 17:27:36 -- target/host_management.sh@69 -- # starttarget 00:16:57.545 17:27:36 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:57.545 17:27:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:57.545 17:27:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:57.545 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.545 17:27:36 -- nvmf/common.sh@469 -- # nvmfpid=4085617 00:16:57.545 17:27:36 -- nvmf/common.sh@470 -- # waitforlisten 4085617 
00:16:57.545 17:27:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:57.545 17:27:36 -- common/autotest_common.sh@819 -- # '[' -z 4085617 ']' 00:16:57.545 17:27:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.545 17:27:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:57.545 17:27:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.545 17:27:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:57.545 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.545 [2024-07-12 17:27:36.505778] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:57.545 [2024-07-12 17:27:36.505833] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.803 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.803 [2024-07-12 17:27:36.582840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.803 [2024-07-12 17:27:36.626658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.803 [2024-07-12 17:27:36.626811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.803 [2024-07-12 17:27:36.626825] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.803 [2024-07-12 17:27:36.626834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.803 [2024-07-12 17:27:36.626941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.803 [2024-07-12 17:27:36.627033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.803 [2024-07-12 17:27:36.627146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.803 [2024-07-12 17:27:36.627146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:58.740 17:27:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:58.740 17:27:37 -- common/autotest_common.sh@852 -- # return 0 00:16:58.740 17:27:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:58.740 17:27:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:58.740 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.740 17:27:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.740 17:27:37 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.740 17:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.740 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.740 [2024-07-12 17:27:37.477305] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.740 17:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.740 17:27:37 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:58.740 17:27:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:58.740 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.740 17:27:37 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:58.740 17:27:37 -- target/host_management.sh@23 -- # cat 00:16:58.740 17:27:37 -- target/host_management.sh@30 -- # rpc_cmd 00:16:58.740 17:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.740 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.740 
Malloc0 00:16:58.740 [2024-07-12 17:27:37.541477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.740 17:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.740 17:27:37 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:58.740 17:27:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:58.740 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.740 17:27:37 -- target/host_management.sh@73 -- # perfpid=4085807 00:16:58.740 17:27:37 -- target/host_management.sh@74 -- # waitforlisten 4085807 /var/tmp/bdevperf.sock 00:16:58.740 17:27:37 -- common/autotest_common.sh@819 -- # '[' -z 4085807 ']' 00:16:58.740 17:27:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.740 17:27:37 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:58.740 17:27:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.740 17:27:37 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:58.740 17:27:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:58.740 17:27:37 -- nvmf/common.sh@520 -- # config=() 00:16:58.740 17:27:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.740 17:27:37 -- nvmf/common.sh@520 -- # local subsystem config 00:16:58.740 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.740 17:27:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:58.740 17:27:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:58.740 { 00:16:58.740 "params": { 00:16:58.740 "name": "Nvme$subsystem", 00:16:58.740 "trtype": "$TEST_TRANSPORT", 00:16:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.740 "adrfam": "ipv4", 00:16:58.740 "trsvcid": "$NVMF_PORT", 00:16:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.740 "hdgst": ${hdgst:-false}, 00:16:58.740 "ddgst": ${ddgst:-false} 00:16:58.740 }, 00:16:58.740 "method": "bdev_nvme_attach_controller" 00:16:58.740 } 00:16:58.740 EOF 00:16:58.740 )") 00:16:58.740 17:27:37 -- nvmf/common.sh@542 -- # cat 00:16:58.740 17:27:37 -- nvmf/common.sh@544 -- # jq . 00:16:58.740 17:27:37 -- nvmf/common.sh@545 -- # IFS=, 00:16:58.740 17:27:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:58.740 "params": { 00:16:58.740 "name": "Nvme0", 00:16:58.740 "trtype": "tcp", 00:16:58.740 "traddr": "10.0.0.2", 00:16:58.740 "adrfam": "ipv4", 00:16:58.740 "trsvcid": "4420", 00:16:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:58.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:58.740 "hdgst": false, 00:16:58.740 "ddgst": false 00:16:58.740 }, 00:16:58.740 "method": "bdev_nvme_attach_controller" 00:16:58.740 }' 00:16:58.740 [2024-07-12 17:27:37.635938] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:58.740 [2024-07-12 17:27:37.635996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085807 ] 00:16:58.740 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.999 [2024-07-12 17:27:37.719200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.999 [2024-07-12 17:27:37.760053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.999 Running I/O for 10 seconds... 00:16:59.620 17:27:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:59.620 17:27:38 -- common/autotest_common.sh@852 -- # return 0 00:16:59.620 17:27:38 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:59.620 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.620 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:16:59.620 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.620 17:27:38 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:59.879 17:27:38 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:59.879 17:27:38 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:59.879 17:27:38 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:59.879 17:27:38 -- target/host_management.sh@52 -- # local ret=1 00:16:59.879 17:27:38 -- target/host_management.sh@53 -- # local i 00:16:59.879 17:27:38 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:59.879 17:27:38 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:59.879 17:27:38 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:59.879 17:27:38 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 
00:16:59.879 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.879 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:16:59.879 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.879 17:27:38 -- target/host_management.sh@55 -- # read_io_count=1593 00:16:59.879 17:27:38 -- target/host_management.sh@58 -- # '[' 1593 -ge 100 ']' 00:16:59.879 17:27:38 -- target/host_management.sh@59 -- # ret=0 00:16:59.879 17:27:38 -- target/host_management.sh@60 -- # break 00:16:59.879 17:27:38 -- target/host_management.sh@64 -- # return 0 00:16:59.879 17:27:38 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:59.879 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.879 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:16:59.879 [2024-07-12 17:27:38.641651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to 
be set 00:16:59.879 [2024-07-12 17:27:38.641870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.641983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.642001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.642020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.642039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.642057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.642076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 
17:27:38.642095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.642114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.879 [2024-07-12 17:27:38.642132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.880 [2024-07-12 17:27:38.642150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17369f0 is same with the state(5) to be set 00:16:59.880 [2024-07-12 17:27:38.643891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.643936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.643957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.643969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.643982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.643992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:59.880 [2024-07-12 17:27:38.644165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.880 [2024-07-12 17:27:38.644286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.880 [2024-07-12 17:27:38.644299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:59.880 [2024-07-12 17:27:38.644311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining in-flight READ/WRITE commands on qid:1 (cids 0-63, lba 87296-96768) are printed and completed with the identical ABORTED - SQ DELETION (00/08) status; the repeated command/completion pairs are elided ...]
00:16:59.881 [2024-07-12 17:27:38.645532] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d3f50 was disconnected and freed. reset controller.
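Every completion above carries the same status pair, printed as (SCT/SC): the first hex byte is the NVMe Status Code Type (00 = generic command status) and the second the Status Code (08 = Command Aborted due to SQ Deletion, the expected status when a submission queue is torn down with I/O still in flight). A minimal decoder for just the statuses seen in this log (decode_status is an illustrative name, not an SPDK API):

```shell
# Decode the "(SCT/SC)" status pair printed in the completions above.
# Covers only the codes seen in this log; not an exhaustive NVMe decoder.
decode_status() {
    local sct=$((16#$1)) sc=$((16#$2))
    if [ "$sct" -eq 0 ] && [ "$sc" -eq 0 ]; then
        echo "SUCCESS"
    elif [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
        echo "ABORTED - SQ DELETION"
    else
        printf 'SCT 0x%02x SC 0x%02x\n' "$sct" "$sc"
    fi
}

decode_status 00 08
```

The dnr:0 ("do not retry" clear) in the same records is why bdevperf requeues these commands after the controller reset rather than failing them permanently.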
00:16:59.881 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:59.881 17:27:38 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:59.881 17:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:59.881 [2024-07-12 17:27:38.646901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:59.881 17:27:38 -- common/autotest_common.sh@10 -- # set +x
00:16:59.881 task offset: 90880 on job bdev=Nvme0n1 fails
00:16:59.881
00:16:59.881 Latency(us)
00:16:59.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:59.881 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:59.881 Job: Nvme0n1 ended in about 0.69 seconds with error
00:16:59.881 Verification LBA range: start 0x0 length 0x400
00:16:59.881 Nvme0n1 : 0.69 2495.93 156.00 93.03 0.00 24340.69 2040.55 36461.85
00:16:59.881 ===================================================================================================================
00:16:59.881 Total : 2495.93 156.00 93.03 0.00 24340.69 2040.55 36461.85
00:16:59.881 [2024-07-12 17:27:38.649221] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:59.881 [2024-07-12 17:27:38.649243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d6380 (9): Bad file descriptor
00:16:59.881 17:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:59.881 17:27:38 -- target/host_management.sh@87 -- # sleep 1
[2024-07-12 17:27:38.700393] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:00.868 17:27:39 -- target/host_management.sh@91 -- # kill -9 4085807
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4085807) - No such process
00:17:00.868 17:27:39 -- target/host_management.sh@91 -- # true
00:17:00.868 17:27:39 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:17:00.868 17:27:39 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:17:00.868 17:27:39 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:17:00.868 17:27:39 -- nvmf/common.sh@520 -- # config=()
00:17:00.868 17:27:39 -- nvmf/common.sh@520 -- # local subsystem config
00:17:00.868 17:27:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:17:00.868 17:27:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:17:00.868 {
00:17:00.868 "params": {
00:17:00.868 "name": "Nvme$subsystem",
00:17:00.868 "trtype": "$TEST_TRANSPORT",
00:17:00.868 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:00.868 "adrfam": "ipv4",
00:17:00.868 "trsvcid": "$NVMF_PORT",
00:17:00.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:00.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:00.868 "hdgst": ${hdgst:-false},
00:17:00.868 "ddgst": ${ddgst:-false}
00:17:00.868 },
00:17:00.868 "method": "bdev_nvme_attach_controller"
00:17:00.868 }
00:17:00.868 EOF
00:17:00.868 )")
00:17:00.868 17:27:39 -- nvmf/common.sh@542 -- # cat
00:17:00.868 17:27:39 -- nvmf/common.sh@544 -- # jq .
00:17:00.868 17:27:39 -- nvmf/common.sh@545 -- # IFS=,
00:17:00.868 17:27:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:17:00.868 "params": {
00:17:00.868 "name": "Nvme0",
00:17:00.868 "trtype": "tcp",
00:17:00.868 "traddr": "10.0.0.2",
00:17:00.868 "adrfam": "ipv4",
00:17:00.868 "trsvcid": "4420",
00:17:00.868 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:00.868 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:00.868 "hdgst": false,
00:17:00.868 "ddgst": false
00:17:00.868 },
00:17:00.868 "method": "bdev_nvme_attach_controller"
00:17:00.868 }'
00:17:00.868 [2024-07-12 17:27:39.711876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:17:00.868 [2024-07-12 17:27:39.711936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4086215 ]
00:17:00.868 EAL: No free 2048 kB hugepages reported on node 1
00:17:00.868 [2024-07-12 17:27:39.795145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:00.868 [2024-07-12 17:27:39.833748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:01.435 Running I/O for 1 seconds...
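The gen_nvmf_target_json trace above shows the config-generation pattern: one heredoc-built JSON stanza per subsystem is appended to a shell array, the stanzas are comma-joined by setting IFS, and the result is fed to bdevperf via --json /dev/fd/62. Stripped of the autotest plumbing, the idiom reduces to the sketch below (gen_target_json and the hard-coded tcp/10.0.0.2/4420 values are illustrative, not the nvmf/common.sh helper):

```shell
# Sketch of the config-generation idiom traced above: build one JSON
# stanza per subsystem with a heredoc, then comma-join them via IFS.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins array elements with the first char of IFS.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 0
```

Piping the output through `jq .`, as the trace does, doubles as a cheap validity check on the assembled document before the benchmark consumes it.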
00:17:02.370
00:17:02.370 Latency(us)
00:17:02.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:02.370 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:02.370 Verification LBA range: start 0x0 length 0x400
00:17:02.370 Nvme0n1 : 1.06 2423.94 151.50 0.00 0.00 24900.26 5242.88 46709.29
00:17:02.370 ===================================================================================================================
00:17:02.370 Total : 2423.94 151.50 0.00 0.00 24900.26 5242.88 46709.29
00:17:02.629 17:27:41 -- target/host_management.sh@101 -- # stoptarget
00:17:02.629 17:27:41 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:17:02.629 17:27:41 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:02.629 17:27:41 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:02.629 17:27:41 -- target/host_management.sh@40 -- # nvmftestfini
00:17:02.629 17:27:41 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:02.629 17:27:41 -- nvmf/common.sh@116 -- # sync
00:17:02.629 17:27:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:02.629 17:27:41 -- nvmf/common.sh@119 -- # set +e
00:17:02.629 17:27:41 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:02.629 17:27:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:02.629 17:27:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:02.629 17:27:41 -- nvmf/common.sh@123 -- # set -e
00:17:02.629 17:27:41 -- nvmf/common.sh@124 -- # return 0
00:17:02.629 17:27:41 -- nvmf/common.sh@477 -- # '[' -n 4085617 ']'
00:17:02.629 17:27:41 -- nvmf/common.sh@478 -- # killprocess 4085617
00:17:02.629 17:27:41 -- common/autotest_common.sh@926 -- # '[' -z 4085617 ']'
00:17:02.629 17:27:41 -- common/autotest_common.sh@930 -- # kill -0 4085617
00:17:02.629 17:27:41 -- common/autotest_common.sh@931 -- # uname
00:17:02.629 17:27:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:02.629 17:27:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4085617
00:17:02.629 17:27:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:02.629 17:27:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:02.629 17:27:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4085617'
killing process with pid 4085617
00:17:02.629 17:27:41 -- common/autotest_common.sh@945 -- # kill 4085617
00:17:02.629 17:27:41 -- common/autotest_common.sh@950 -- # wait 4085617
00:17:02.889 [2024-07-12 17:27:41.650363] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:17:02.889 17:27:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:02.889 17:27:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:02.889 17:27:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:02.889 17:27:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:02.889 17:27:41 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:02.889 17:27:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:02.889 17:27:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:02.889 17:27:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:04.793 17:27:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:17:04.793
00:17:04.793 real 0m7.298s
00:17:04.793 user 0m22.967s
00:17:04.793 sys 0m1.288s
00:17:04.793 17:27:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:04.793 17:27:43 -- common/autotest_common.sh@10 -- # set +x
00:17:04.793 ************************************
00:17:04.793 END TEST nvmf_host_management
00:17:04.793 ************************************
00:17:05.053 17:27:43 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:17:05.053
00:17:05.053 real 0m13.060s
00:17:05.053 user 0m24.551s
00:17:05.053 sys 0m5.478s
00:17:05.053 17:27:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:05.053 17:27:43 -- common/autotest_common.sh@10 -- # set +x
00:17:05.053 ************************************
00:17:05.053 END TEST nvmf_host_management
00:17:05.053 ************************************
00:17:05.053 17:27:43 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:17:05.053 17:27:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:17:05.053 17:27:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:05.053 17:27:43 -- common/autotest_common.sh@10 -- # set +x
00:17:05.053 ************************************
00:17:05.053 START TEST nvmf_lvol
00:17:05.053 ************************************
00:17:05.053 17:27:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:17:05.053 * Looking for test storage...
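The killprocess teardown traced in the host_management cleanup above follows a fixed shape: probe the pid with kill -0, refuse to signal privileged helpers, send the signal, then wait until the process is really gone. A minimal stand-alone sketch of that pattern (killproc is an illustrative name, not the autotest_common.sh helper):

```shell
# Minimal sketch of the kill-and-wait teardown pattern traced above.
killproc() {
    local pid=$1
    # kill -0 probes for existence without delivering a signal.
    kill -0 "$pid" 2>/dev/null || return 0
    kill "$pid" 2>/dev/null
    # 'wait' reaps the process if it is our child (otherwise it errors
    # out immediately); the poll loop covers non-child pids.
    wait "$pid" 2>/dev/null || true
    while kill -0 "$pid" 2>/dev/null; do
        sleep 0.1
    done
}

# Demo: start a long-lived child and tear it down.
sleep 30 &
demo_pid=$!
killproc "$demo_pid"
```

Polling after the signal matters: returning before the pid disappears is exactly what leaves behind the "No such process" noise seen earlier when a later cleanup pass re-kills an already-dead pid.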
00:17:05.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.053 17:27:43 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.053 17:27:43 -- nvmf/common.sh@7 -- # uname -s 00:17:05.053 17:27:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.053 17:27:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.053 17:27:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.053 17:27:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.053 17:27:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.053 17:27:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.053 17:27:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.053 17:27:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.053 17:27:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.053 17:27:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.053 17:27:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:05.053 17:27:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:05.053 17:27:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.053 17:27:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.053 17:27:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.053 17:27:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.053 17:27:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.053 17:27:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.053 17:27:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.053 17:27:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.053 17:27:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.053 17:27:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.053 17:27:43 -- paths/export.sh@5 -- # export PATH 00:17:05.053 17:27:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.053 17:27:43 -- nvmf/common.sh@46 -- # : 0 00:17:05.053 17:27:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:05.053 17:27:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:05.053 17:27:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:05.053 17:27:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.053 17:27:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.053 17:27:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:05.053 17:27:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:05.053 17:27:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:05.053 17:27:43 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.053 17:27:43 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.053 17:27:43 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:05.053 17:27:43 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:05.053 17:27:43 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.053 17:27:43 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:05.053 17:27:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:05.053 17:27:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.053 17:27:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:05.053 17:27:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:05.053 17:27:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:17:05.053 17:27:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.053 17:27:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.053 17:27:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.053 17:27:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:05.053 17:27:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:05.053 17:27:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:05.053 17:27:43 -- common/autotest_common.sh@10 -- # set +x 00:17:11.639 17:27:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:11.639 17:27:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:11.639 17:27:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:11.639 17:27:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:11.639 17:27:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:11.639 17:27:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:11.639 17:27:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:11.639 17:27:49 -- nvmf/common.sh@294 -- # net_devs=() 00:17:11.639 17:27:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:11.639 17:27:49 -- nvmf/common.sh@295 -- # e810=() 00:17:11.639 17:27:49 -- nvmf/common.sh@295 -- # local -ga e810 00:17:11.639 17:27:49 -- nvmf/common.sh@296 -- # x722=() 00:17:11.639 17:27:49 -- nvmf/common.sh@296 -- # local -ga x722 00:17:11.639 17:27:49 -- nvmf/common.sh@297 -- # mlx=() 00:17:11.639 17:27:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:11.639 17:27:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.639 17:27:49 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.639 17:27:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:11.639 17:27:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:11.639 17:27:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:11.640 17:27:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:11.640 17:27:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:11.640 17:27:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:11.640 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:11.640 17:27:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:11.640 17:27:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:11.640 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:11.640 17:27:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.640 17:27:49 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:11.640 17:27:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:11.640 17:27:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.640 17:27:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:11.640 17:27:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.640 17:27:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:11.640 Found net devices under 0000:af:00.0: cvl_0_0 00:17:11.640 17:27:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.640 17:27:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:11.640 17:27:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.640 17:27:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:11.640 17:27:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.640 17:27:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:11.640 Found net devices under 0000:af:00.1: cvl_0_1 00:17:11.640 17:27:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.640 17:27:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:11.640 17:27:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:11.640 17:27:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:11.640 17:27:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.640 17:27:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.640 17:27:49 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.640 17:27:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:11.640 17:27:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.640 17:27:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.640 17:27:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:11.640 17:27:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.640 17:27:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.640 17:27:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:11.640 17:27:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:11.640 17:27:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.640 17:27:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.640 17:27:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.640 17:27:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.640 17:27:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:11.640 17:27:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.640 17:27:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.640 17:27:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.640 17:27:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:11.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:17:11.640 00:17:11.640 --- 10.0.0.2 ping statistics --- 00:17:11.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.640 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:11.640 17:27:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:17:11.640 00:17:11.640 --- 10.0.0.1 ping statistics --- 00:17:11.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.640 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:11.640 17:27:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.640 17:27:49 -- nvmf/common.sh@410 -- # return 0 00:17:11.640 17:27:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:11.640 17:27:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.640 17:27:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:11.640 17:27:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.640 17:27:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:11.640 17:27:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:11.640 17:27:49 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:11.640 17:27:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:11.640 17:27:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:11.640 17:27:49 -- common/autotest_common.sh@10 -- # set +x 00:17:11.640 17:27:49 -- nvmf/common.sh@469 -- # nvmfpid=4090231 00:17:11.640 17:27:49 -- nvmf/common.sh@470 -- # waitforlisten 4090231 00:17:11.640 17:27:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:11.640 17:27:49 -- common/autotest_common.sh@819 -- # '[' -z 4090231 ']' 00:17:11.640 17:27:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.640 17:27:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.640 17:27:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:11.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.640 17:27:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.640 17:27:49 -- common/autotest_common.sh@10 -- # set +x 00:17:11.640 [2024-07-12 17:27:49.709242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:11.640 [2024-07-12 17:27:49.709304] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.640 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.640 [2024-07-12 17:27:49.796772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.640 [2024-07-12 17:27:49.838543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:11.640 [2024-07-12 17:27:49.838692] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.640 [2024-07-12 17:27:49.838704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.640 [2024-07-12 17:27:49.838713] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:11.640 [2024-07-12 17:27:49.838772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.640 [2024-07-12 17:27:49.838873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.640 [2024-07-12 17:27:49.838875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.640 17:27:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.640 17:27:50 -- common/autotest_common.sh@852 -- # return 0 00:17:11.640 17:27:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:11.640 17:27:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:11.640 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.640 17:27:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.640 17:27:50 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:11.899 [2024-07-12 17:27:50.807773] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.899 17:27:50 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:12.158 17:27:51 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:12.158 17:27:51 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:12.416 17:27:51 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:12.416 17:27:51 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:12.675 17:27:51 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:12.933 17:27:51 -- target/nvmf_lvol.sh@29 -- # lvs=10df81f7-ed4c-4f8d-a051-665b49f98e57 00:17:12.933 17:27:51 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 10df81f7-ed4c-4f8d-a051-665b49f98e57 lvol 20 00:17:13.192 17:27:52 -- target/nvmf_lvol.sh@32 -- # lvol=b1a705e3-fe87-431b-91dc-e522af8966c1 00:17:13.192 17:27:52 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:13.451 17:27:52 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b1a705e3-fe87-431b-91dc-e522af8966c1 00:17:13.709 17:27:52 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:13.968 [2024-07-12 17:27:52.747385] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.968 17:27:52 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.227 17:27:53 -- target/nvmf_lvol.sh@42 -- # perf_pid=4090806 00:17:14.227 17:27:53 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:14.227 17:27:53 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:14.227 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.163 17:27:54 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b1a705e3-fe87-431b-91dc-e522af8966c1 MY_SNAPSHOT 00:17:15.422 17:27:54 -- target/nvmf_lvol.sh@47 -- # snapshot=c5e78369-e924-40ed-b368-1b404e7383e4 00:17:15.422 17:27:54 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
b1a705e3-fe87-431b-91dc-e522af8966c1 30 00:17:15.681 17:27:54 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c5e78369-e924-40ed-b368-1b404e7383e4 MY_CLONE 00:17:15.939 17:27:54 -- target/nvmf_lvol.sh@49 -- # clone=6c582aef-3e8e-40d4-80fa-6026eff3124c 00:17:15.940 17:27:54 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6c582aef-3e8e-40d4-80fa-6026eff3124c 00:17:16.876 17:27:55 -- target/nvmf_lvol.sh@53 -- # wait 4090806 00:17:24.996 Initializing NVMe Controllers 00:17:24.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:24.996 Controller IO queue size 128, less than required. 00:17:24.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:24.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:24.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:24.996 Initialization complete. Launching workers. 
00:17:24.996 ======================================================== 00:17:24.996 Latency(us) 00:17:24.996 Device Information : IOPS MiB/s Average min max 00:17:24.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9007.20 35.18 14211.92 972.70 109505.43 00:17:24.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8917.60 34.83 14361.45 3538.16 55068.51 00:17:24.996 ======================================================== 00:17:24.996 Total : 17924.79 70.02 14286.31 972.70 109505.43 00:17:24.996 00:17:24.996 17:28:03 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:24.996 17:28:03 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b1a705e3-fe87-431b-91dc-e522af8966c1 00:17:24.996 17:28:03 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10df81f7-ed4c-4f8d-a051-665b49f98e57 00:17:25.255 17:28:04 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:25.255 17:28:04 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:25.255 17:28:04 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:25.255 17:28:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:25.255 17:28:04 -- nvmf/common.sh@116 -- # sync 00:17:25.255 17:28:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:25.255 17:28:04 -- nvmf/common.sh@119 -- # set +e 00:17:25.255 17:28:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:25.255 17:28:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:25.255 rmmod nvme_tcp 00:17:25.255 rmmod nvme_fabrics 00:17:25.255 rmmod nvme_keyring 00:17:25.255 17:28:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:25.255 17:28:04 -- nvmf/common.sh@123 -- # set -e 00:17:25.255 17:28:04 -- nvmf/common.sh@124 -- # return 0 00:17:25.255 17:28:04 -- nvmf/common.sh@477 -- # '[' -n 
4090231 ']' 00:17:25.255 17:28:04 -- nvmf/common.sh@478 -- # killprocess 4090231 00:17:25.255 17:28:04 -- common/autotest_common.sh@926 -- # '[' -z 4090231 ']' 00:17:25.255 17:28:04 -- common/autotest_common.sh@930 -- # kill -0 4090231 00:17:25.255 17:28:04 -- common/autotest_common.sh@931 -- # uname 00:17:25.255 17:28:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.255 17:28:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4090231 00:17:25.256 17:28:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:25.256 17:28:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:25.256 17:28:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4090231' 00:17:25.256 killing process with pid 4090231 00:17:25.256 17:28:04 -- common/autotest_common.sh@945 -- # kill 4090231 00:17:25.256 17:28:04 -- common/autotest_common.sh@950 -- # wait 4090231 00:17:25.513 17:28:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:25.513 17:28:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:25.513 17:28:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:25.513 17:28:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.513 17:28:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:25.513 17:28:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.513 17:28:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.513 17:28:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.047 17:28:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:28.047 00:17:28.047 real 0m22.596s 00:17:28.047 user 1m6.932s 00:17:28.047 sys 0m7.086s 00:17:28.047 17:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.047 17:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:28.047 ************************************ 00:17:28.047 END TEST nvmf_lvol 00:17:28.047 ************************************ 
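The `spdk_nvme_perf` summary printed at the end of the nvmf_lvol test above can be sanity-checked: the "Total" row is the sum of the two per-core IOPS figures, and its average latency is the IOPS-weighted mean of the per-core averages (the 0.01 IOPS discrepancy against the printed total is display rounding). A minimal sketch, using the per-core numbers copied from the table:

```python
# Per-core figures copied from the spdk_nvme_perf summary above
# (lcore 3 and lcore 4; latencies in microseconds).
cores = [
    {"iops": 9007.20, "avg_us": 14211.92, "min_us": 972.70,  "max_us": 109505.43},
    {"iops": 8917.60, "avg_us": 14361.45, "min_us": 3538.16, "max_us": 55068.51},
]

# Total row: sum of IOPS, IOPS-weighted mean latency, overall min/max.
total_iops = sum(c["iops"] for c in cores)
weighted_avg = sum(c["iops"] * c["avg_us"] for c in cores) / total_iops
overall_min = min(c["min_us"] for c in cores)
overall_max = max(c["max_us"] for c in cores)

print(f"Total IOPS: {total_iops:.2f}")                 # log reports 17924.79
print(f"Weighted avg latency: {weighted_avg:.2f} us")  # log reports 14286.31
print(f"min/max: {overall_min}/{overall_max} us")      # log reports 972.70/109505.43
```

This confirms the aggregate row is derived from the per-core rows rather than measured independently.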
00:17:28.047 17:28:06 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:28.047 17:28:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:28.047 17:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:28.047 17:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:28.047 ************************************ 00:17:28.047 START TEST nvmf_lvs_grow 00:17:28.047 ************************************ 00:17:28.047 17:28:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:28.047 * Looking for test storage... 00:17:28.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.047 17:28:06 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.047 17:28:06 -- nvmf/common.sh@7 -- # uname -s 00:17:28.047 17:28:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.047 17:28:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.047 17:28:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.047 17:28:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.047 17:28:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.047 17:28:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.047 17:28:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.047 17:28:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.047 17:28:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.047 17:28:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.047 17:28:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:28.047 17:28:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:28.047 17:28:06 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.047 17:28:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.047 17:28:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.047 17:28:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.047 17:28:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.047 17:28:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.047 17:28:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.047 17:28:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.047 17:28:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.047 17:28:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.047 17:28:06 -- paths/export.sh@5 -- # export PATH 00:17:28.047 17:28:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.047 17:28:06 -- nvmf/common.sh@46 -- # : 0 00:17:28.047 17:28:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:28.047 17:28:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:28.047 17:28:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:28.047 17:28:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.047 17:28:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.047 17:28:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:28.047 17:28:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:28.047 17:28:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:28.047 17:28:06 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.047 17:28:06 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.047 17:28:06 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:28.047 17:28:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:28.047 17:28:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.047 17:28:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:28.047 17:28:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:28.047 17:28:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:28.047 17:28:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.047 17:28:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.047 17:28:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.047 17:28:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:28.047 17:28:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:28.047 17:28:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:28.047 17:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:33.317 17:28:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:33.317 17:28:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:33.317 17:28:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:33.317 17:28:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:33.317 17:28:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:33.317 17:28:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:33.317 17:28:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:33.317 17:28:11 -- nvmf/common.sh@294 -- # net_devs=() 00:17:33.317 17:28:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:33.317 17:28:11 -- nvmf/common.sh@295 -- # e810=() 00:17:33.317 17:28:11 -- nvmf/common.sh@295 -- # local -ga e810 00:17:33.317 17:28:11 -- nvmf/common.sh@296 -- # x722=() 00:17:33.317 17:28:11 -- nvmf/common.sh@296 -- # local -ga x722 00:17:33.317 17:28:11 -- nvmf/common.sh@297 -- # mlx=() 00:17:33.317 17:28:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:33.317 17:28:11 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.317 17:28:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:33.317 17:28:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:33.317 17:28:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:33.317 17:28:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.317 17:28:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:33.317 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:33.317 17:28:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:33.317 
17:28:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.317 17:28:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:33.317 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:33.317 17:28:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:33.317 17:28:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.317 17:28:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.317 17:28:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.317 17:28:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.317 17:28:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:33.317 Found net devices under 0000:af:00.0: cvl_0_0 00:17:33.317 17:28:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.317 17:28:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.317 17:28:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.317 17:28:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.317 17:28:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.317 17:28:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:33.317 Found net devices under 0000:af:00.1: cvl_0_1 00:17:33.317 17:28:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.317 17:28:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:33.317 17:28:11 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:17:33.317 17:28:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:33.317 17:28:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:33.317 17:28:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.317 17:28:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.317 17:28:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.317 17:28:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:33.317 17:28:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.317 17:28:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.317 17:28:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:33.317 17:28:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.317 17:28:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.317 17:28:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:33.317 17:28:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:33.317 17:28:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.317 17:28:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.317 17:28:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.317 17:28:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.317 17:28:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:33.317 17:28:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.317 17:28:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.318 17:28:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.318 17:28:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:33.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:33.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:17:33.318 00:17:33.318 --- 10.0.0.2 ping statistics --- 00:17:33.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.318 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:33.318 17:28:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:17:33.318 00:17:33.318 --- 10.0.0.1 ping statistics --- 00:17:33.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.318 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:33.318 17:28:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.318 17:28:12 -- nvmf/common.sh@410 -- # return 0 00:17:33.318 17:28:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:33.318 17:28:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.318 17:28:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:33.318 17:28:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:33.318 17:28:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.318 17:28:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:33.318 17:28:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:33.318 17:28:12 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:33.318 17:28:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.318 17:28:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:33.318 17:28:12 -- common/autotest_common.sh@10 -- # set +x 00:17:33.318 17:28:12 -- nvmf/common.sh@469 -- # nvmfpid=4096403 00:17:33.318 17:28:12 -- nvmf/common.sh@470 -- # waitforlisten 4096403 00:17:33.318 17:28:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:33.318 17:28:12 -- 
common/autotest_common.sh@819 -- # '[' -z 4096403 ']' 00:17:33.318 17:28:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.318 17:28:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.318 17:28:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.318 17:28:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.318 17:28:12 -- common/autotest_common.sh@10 -- # set +x 00:17:33.318 [2024-07-12 17:28:12.240688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:33.318 [2024-07-12 17:28:12.240741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.318 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.576 [2024-07-12 17:28:12.327453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.576 [2024-07-12 17:28:12.368988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.576 [2024-07-12 17:28:12.369134] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.576 [2024-07-12 17:28:12.369146] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.576 [2024-07-12 17:28:12.369155] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
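For reference, the nvmf_tcp_init sequence captured above (flush the two NIC ports, move the target-side port into a network namespace, address both ends, open TCP port 4420, then ping in both directions) can be summarized as a dry-run script. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are copied from the log; the `run` wrapper only echoes each command, so this sketch is safe to execute without root and is a summary, not the test's actual code path.

```shell
# Dry-run summary of the log's nvmf_tcp_init steps: one port of the
# dual-port E810 NIC is moved into a namespace so initiator and target
# can talk NVMe/TCP across the physical link between the two ports.
TGT_IF=cvl_0_0        # target port (from the log)
INI_IF=cvl_0_1        # initiator port (from the log)
NS=cvl_0_0_ns_spdk    # namespace name (from the log)

run() { echo "+ $*"; }   # echo each command instead of executing it

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator side -> target side
```

In the real run, `nvmf_tgt` is then launched inside the namespace via `ip netns exec`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the log.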
00:17:33.576 [2024-07-12 17:28:12.369183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.510 17:28:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.510 17:28:13 -- common/autotest_common.sh@852 -- # return 0 00:17:34.510 17:28:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.510 17:28:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.510 17:28:13 -- common/autotest_common.sh@10 -- # set +x 00:17:34.510 17:28:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:34.510 [2024-07-12 17:28:13.417745] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:34.510 17:28:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:34.510 17:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:34.510 17:28:13 -- common/autotest_common.sh@10 -- # set +x 00:17:34.510 ************************************ 00:17:34.510 START TEST lvs_grow_clean 00:17:34.510 ************************************ 00:17:34.510 17:28:13 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.510 17:28:13 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:34.768 17:28:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:34.768 17:28:13 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:35.026 17:28:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:35.026 17:28:13 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:35.026 17:28:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:35.283 17:28:14 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:35.283 17:28:14 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:35.283 17:28:14 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 lvol 150 00:17:35.541 17:28:14 -- target/nvmf_lvs_grow.sh@33 -- # lvol=03c91637-0721-49db-87b6-3b858b8fe968 00:17:35.541 17:28:14 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:35.541 17:28:14 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:35.799 [2024-07-12 17:28:14.624766] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:35.799 [2024-07-12 17:28:14.624830] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:35.799 true 00:17:35.799 17:28:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:35.799 17:28:14 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:36.057 17:28:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:36.057 17:28:14 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:36.315 17:28:15 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03c91637-0721-49db-87b6-3b858b8fe968 00:17:36.574 17:28:15 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:36.833 [2024-07-12 17:28:15.543679] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.833 17:28:15 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:36.833 17:28:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4097021 00:17:36.833 17:28:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.833 17:28:15 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:36.833 17:28:15 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4097021 /var/tmp/bdevperf.sock 00:17:37.092 17:28:15 -- common/autotest_common.sh@819 -- # '[' -z 4097021 ']' 00:17:37.092 17:28:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.092 17:28:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:37.092 17:28:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.092 17:28:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:37.092 17:28:15 -- common/autotest_common.sh@10 -- # set +x 00:17:37.092 [2024-07-12 17:28:15.845819] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:37.092 [2024-07-12 17:28:15.845878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097021 ] 00:17:37.092 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.092 [2024-07-12 17:28:15.916818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.092 [2024-07-12 17:28:15.958607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.028 17:28:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.028 17:28:16 -- common/autotest_common.sh@852 -- # return 0 00:17:38.028 17:28:16 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:38.286 Nvme0n1 00:17:38.286 17:28:17 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:17:38.545 [ 00:17:38.545 { 00:17:38.545 "name": "Nvme0n1", 00:17:38.545 "aliases": [ 00:17:38.545 "03c91637-0721-49db-87b6-3b858b8fe968" 00:17:38.545 ], 00:17:38.545 "product_name": "NVMe disk", 00:17:38.545 "block_size": 4096, 00:17:38.545 "num_blocks": 38912, 00:17:38.545 "uuid": "03c91637-0721-49db-87b6-3b858b8fe968", 00:17:38.545 "assigned_rate_limits": { 00:17:38.545 "rw_ios_per_sec": 0, 00:17:38.545 "rw_mbytes_per_sec": 0, 00:17:38.545 "r_mbytes_per_sec": 0, 00:17:38.545 "w_mbytes_per_sec": 0 00:17:38.545 }, 00:17:38.545 "claimed": false, 00:17:38.545 "zoned": false, 00:17:38.545 "supported_io_types": { 00:17:38.546 "read": true, 00:17:38.546 "write": true, 00:17:38.546 "unmap": true, 00:17:38.546 "write_zeroes": true, 00:17:38.546 "flush": true, 00:17:38.546 "reset": true, 00:17:38.546 "compare": true, 00:17:38.546 "compare_and_write": true, 00:17:38.546 "abort": true, 00:17:38.546 "nvme_admin": true, 00:17:38.546 "nvme_io": true 00:17:38.546 }, 00:17:38.546 "driver_specific": { 00:17:38.546 "nvme": [ 00:17:38.546 { 00:17:38.546 "trid": { 00:17:38.546 "trtype": "TCP", 00:17:38.546 "adrfam": "IPv4", 00:17:38.546 "traddr": "10.0.0.2", 00:17:38.546 "trsvcid": "4420", 00:17:38.546 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:38.546 }, 00:17:38.546 "ctrlr_data": { 00:17:38.546 "cntlid": 1, 00:17:38.546 "vendor_id": "0x8086", 00:17:38.546 "model_number": "SPDK bdev Controller", 00:17:38.546 "serial_number": "SPDK0", 00:17:38.546 "firmware_revision": "24.01.1", 00:17:38.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:38.546 "oacs": { 00:17:38.546 "security": 0, 00:17:38.546 "format": 0, 00:17:38.546 "firmware": 0, 00:17:38.546 "ns_manage": 0 00:17:38.546 }, 00:17:38.546 "multi_ctrlr": true, 00:17:38.546 "ana_reporting": false 00:17:38.546 }, 00:17:38.546 "vs": { 00:17:38.546 "nvme_version": "1.3" 00:17:38.546 }, 00:17:38.546 "ns_data": { 00:17:38.546 "id": 1, 00:17:38.546 "can_share": true 00:17:38.546 } 00:17:38.546 } 00:17:38.546 ], 00:17:38.546 
"mp_policy": "active_passive" 00:17:38.546 } 00:17:38.546 } 00:17:38.546 ] 00:17:38.546 17:28:17 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.546 17:28:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4097357 00:17:38.546 17:28:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:38.546 Running I/O for 10 seconds... 00:17:39.924 Latency(us) 00:17:39.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.924 Nvme0n1 : 1.00 15569.00 60.82 0.00 0.00 0.00 0.00 0.00 00:17:39.924 =================================================================================================================== 00:17:39.924 Total : 15569.00 60.82 0.00 0.00 0.00 0.00 0.00 00:17:39.924 00:17:40.489 17:28:19 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:40.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.747 Nvme0n1 : 2.00 15658.00 61.16 0.00 0.00 0.00 0.00 0.00 00:17:40.747 =================================================================================================================== 00:17:40.747 Total : 15658.00 61.16 0.00 0.00 0.00 0.00 0.00 00:17:40.747 00:17:40.747 true 00:17:40.747 17:28:19 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:40.747 17:28:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:41.005 17:28:19 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:41.005 17:28:19 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:41.005 17:28:19 -- target/nvmf_lvs_grow.sh@65 -- # wait 4097357 00:17:41.571 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.571 Nvme0n1 : 3.00 15730.33 61.45 0.00 0.00 0.00 0.00 0.00 00:17:41.571 =================================================================================================================== 00:17:41.571 Total : 15730.33 61.45 0.00 0.00 0.00 0.00 0.00 00:17:41.571 00:17:42.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.967 Nvme0n1 : 4.00 15761.75 61.57 0.00 0.00 0.00 0.00 0.00 00:17:42.967 =================================================================================================================== 00:17:42.967 Total : 15761.75 61.57 0.00 0.00 0.00 0.00 0.00 00:17:42.967 00:17:43.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.598 Nvme0n1 : 5.00 15784.40 61.66 0.00 0.00 0.00 0.00 0.00 00:17:43.598 =================================================================================================================== 00:17:43.598 Total : 15784.40 61.66 0.00 0.00 0.00 0.00 0.00 00:17:43.598 00:17:44.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.533 Nvme0n1 : 6.00 15798.67 61.71 0.00 0.00 0.00 0.00 0.00 00:17:44.533 =================================================================================================================== 00:17:44.533 Total : 15798.67 61.71 0.00 0.00 0.00 0.00 0.00 00:17:44.533 00:17:45.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.910 Nvme0n1 : 7.00 15823.00 61.81 0.00 0.00 0.00 0.00 0.00 00:17:45.910 =================================================================================================================== 00:17:45.910 Total : 15823.00 61.81 0.00 0.00 0.00 0.00 0.00 00:17:45.910 00:17:46.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.848 Nvme0n1 : 8.00 15836.75 61.86 0.00 0.00 0.00 0.00 0.00 00:17:46.848 
=================================================================================================================== 00:17:46.848 Total : 15836.75 61.86 0.00 0.00 0.00 0.00 0.00 00:17:46.848 00:17:47.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.784 Nvme0n1 : 9.00 15837.00 61.86 0.00 0.00 0.00 0.00 0.00 00:17:47.784 =================================================================================================================== 00:17:47.784 Total : 15837.00 61.86 0.00 0.00 0.00 0.00 0.00 00:17:47.784 00:17:48.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.720 Nvme0n1 : 10.00 15848.20 61.91 0.00 0.00 0.00 0.00 0.00 00:17:48.720 =================================================================================================================== 00:17:48.720 Total : 15848.20 61.91 0.00 0.00 0.00 0.00 0.00 00:17:48.720 00:17:48.720 00:17:48.720 Latency(us) 00:17:48.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.720 Nvme0n1 : 10.01 15848.53 61.91 0.00 0.00 8072.25 4825.83 15728.64 00:17:48.720 =================================================================================================================== 00:17:48.720 Total : 15848.53 61.91 0.00 0.00 8072.25 4825.83 15728.64 00:17:48.720 0 00:17:48.720 17:28:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4097021 00:17:48.720 17:28:27 -- common/autotest_common.sh@926 -- # '[' -z 4097021 ']' 00:17:48.720 17:28:27 -- common/autotest_common.sh@930 -- # kill -0 4097021 00:17:48.720 17:28:27 -- common/autotest_common.sh@931 -- # uname 00:17:48.720 17:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:48.720 17:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4097021 00:17:48.720 17:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:48.720 17:28:27 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:48.720 17:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4097021' 00:17:48.720 killing process with pid 4097021 00:17:48.720 17:28:27 -- common/autotest_common.sh@945 -- # kill 4097021 00:17:48.720 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.720 00:17:48.720 Latency(us) 00:17:48.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.720 =================================================================================================================== 00:17:48.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.720 17:28:27 -- common/autotest_common.sh@950 -- # wait 4097021 00:17:48.979 17:28:27 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:49.239 17:28:28 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:49.239 17:28:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:49.498 17:28:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:49.498 17:28:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:49.498 17:28:28 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:49.757 [2024-07-12 17:28:28.470261] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:49.757 17:28:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:49.757 17:28:28 -- common/autotest_common.sh@640 -- # local es=0 00:17:49.757 17:28:28 -- common/autotest_common.sh@642 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:49.757 17:28:28 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.757 17:28:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.757 17:28:28 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.757 17:28:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.757 17:28:28 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.757 17:28:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.757 17:28:28 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.757 17:28:28 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:49.757 17:28:28 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:50.015 request: 00:17:50.015 { 00:17:50.015 "uuid": "201897ac-a42d-4721-8d2f-c9661a8b13f2", 00:17:50.015 "method": "bdev_lvol_get_lvstores", 00:17:50.015 "req_id": 1 00:17:50.015 } 00:17:50.015 Got JSON-RPC error response 00:17:50.015 response: 00:17:50.015 { 00:17:50.015 "code": -19, 00:17:50.015 "message": "No such device" 00:17:50.015 } 00:17:50.015 17:28:28 -- common/autotest_common.sh@643 -- # es=1 00:17:50.015 17:28:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:50.015 17:28:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:50.015 17:28:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:50.015 17:28:28 -- target/nvmf_lvs_grow.sh@85 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:50.015 aio_bdev 00:17:50.274 17:28:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 03c91637-0721-49db-87b6-3b858b8fe968 00:17:50.274 17:28:28 -- common/autotest_common.sh@887 -- # local bdev_name=03c91637-0721-49db-87b6-3b858b8fe968 00:17:50.274 17:28:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:50.274 17:28:28 -- common/autotest_common.sh@889 -- # local i 00:17:50.274 17:28:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:50.274 17:28:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:50.274 17:28:28 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:50.274 17:28:29 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 03c91637-0721-49db-87b6-3b858b8fe968 -t 2000 00:17:50.533 [ 00:17:50.533 { 00:17:50.533 "name": "03c91637-0721-49db-87b6-3b858b8fe968", 00:17:50.533 "aliases": [ 00:17:50.533 "lvs/lvol" 00:17:50.533 ], 00:17:50.533 "product_name": "Logical Volume", 00:17:50.533 "block_size": 4096, 00:17:50.533 "num_blocks": 38912, 00:17:50.533 "uuid": "03c91637-0721-49db-87b6-3b858b8fe968", 00:17:50.533 "assigned_rate_limits": { 00:17:50.533 "rw_ios_per_sec": 0, 00:17:50.533 "rw_mbytes_per_sec": 0, 00:17:50.533 "r_mbytes_per_sec": 0, 00:17:50.533 "w_mbytes_per_sec": 0 00:17:50.533 }, 00:17:50.533 "claimed": false, 00:17:50.533 "zoned": false, 00:17:50.533 "supported_io_types": { 00:17:50.533 "read": true, 00:17:50.533 "write": true, 00:17:50.533 "unmap": true, 00:17:50.533 "write_zeroes": true, 00:17:50.533 "flush": false, 00:17:50.533 "reset": true, 00:17:50.533 "compare": false, 00:17:50.533 "compare_and_write": false, 00:17:50.533 "abort": false, 00:17:50.533 "nvme_admin": false, 00:17:50.533 
"nvme_io": false 00:17:50.533 }, 00:17:50.533 "driver_specific": { 00:17:50.533 "lvol": { 00:17:50.533 "lvol_store_uuid": "201897ac-a42d-4721-8d2f-c9661a8b13f2", 00:17:50.533 "base_bdev": "aio_bdev", 00:17:50.533 "thin_provision": false, 00:17:50.533 "snapshot": false, 00:17:50.533 "clone": false, 00:17:50.533 "esnap_clone": false 00:17:50.533 } 00:17:50.533 } 00:17:50.533 } 00:17:50.533 ] 00:17:50.533 17:28:29 -- common/autotest_common.sh@895 -- # return 0 00:17:50.533 17:28:29 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:50.533 17:28:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:50.793 17:28:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:50.793 17:28:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:50.793 17:28:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:51.051 17:28:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:51.051 17:28:29 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 03c91637-0721-49db-87b6-3b858b8fe968 00:17:51.310 17:28:30 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 201897ac-a42d-4721-8d2f-c9661a8b13f2 00:17:51.569 17:28:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.828 00:17:51.828 real 0m17.199s 00:17:51.828 user 0m17.261s 00:17:51.828 sys 0m1.389s 00:17:51.828 17:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 
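The `data_clusters` values asserted in this test (49 before `bdev_lvol_grow_lvstore`, 99 after the file is truncated to 400M and rescanned) are consistent with a simple model: the AIO file size divided into the 4 MiB clusters requested at `bdev_lvol_create_lvstore` time, minus roughly one cluster's worth of lvstore metadata. The exact metadata overhead depends on settings such as `--md-pages-per-cluster-ratio`, so the flat one-cluster deduction below is an assumption that happens to match this log, not SPDK's documented formula.

```shell
# Cluster-count model for the lvs_grow test: 4 MiB clusters, and
# (by assumption) one cluster's worth of space lost to metadata.
cluster_mb=4
data_clusters() { echo $(( $1 / cluster_mb - 1 )); }

data_clusters 200   # prints 49: the count checked before the grow
data_clusters 400   # prints 99: the count checked after truncate+rescan+grow
```

The same arithmetic explains why the 150M lvol (`num_blocks: 38912` at 4096-byte blocks) fits in the grown store but would not have left the store with 61 free clusters before the grow.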
00:17:51.828 17:28:30 -- common/autotest_common.sh@10 -- # set +x 00:17:51.828 ************************************ 00:17:51.828 END TEST lvs_grow_clean 00:17:51.828 ************************************ 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:51.828 17:28:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:51.828 17:28:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.828 17:28:30 -- common/autotest_common.sh@10 -- # set +x 00:17:51.828 ************************************ 00:17:51.828 START TEST lvs_grow_dirty 00:17:51.828 ************************************ 00:17:51.828 17:28:30 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.828 17:28:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:52.086 17:28:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:52.086 17:28:30 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:52.345 17:28:31 -- target/nvmf_lvs_grow.sh@28 -- # lvs=08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:17:52.345 17:28:31 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:17:52.345 17:28:31 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:52.604 17:28:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:52.604 17:28:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:52.604 17:28:31 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f lvol 150 00:17:52.863 17:28:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4502fc9-4270-4f05-a050-c00b6560a295 00:17:52.863 17:28:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:52.863 17:28:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:53.122 [2024-07-12 17:28:31.879845] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:53.122 [2024-07-12 17:28:31.879909] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:53.122 true 00:17:53.122 17:28:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:17:53.122 17:28:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:53.382 17:28:32 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:53.382 17:28:32 -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:53.639 17:28:32 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4502fc9-4270-4f05-a050-c00b6560a295 00:17:53.897 17:28:32 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:53.897 17:28:32 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:54.154 17:28:33 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:54.154 17:28:33 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4100232 00:17:54.154 17:28:33 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.154 17:28:33 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4100232 /var/tmp/bdevperf.sock 00:17:54.154 17:28:33 -- common/autotest_common.sh@819 -- # '[' -z 4100232 ']' 00:17:54.154 17:28:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.154 17:28:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.154 17:28:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:54.154 17:28:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.154 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:17:54.154 [2024-07-12 17:28:33.108752] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:54.154 [2024-07-12 17:28:33.108810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100232 ] 00:17:54.412 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.412 [2024-07-12 17:28:33.179159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.412 [2024-07-12 17:28:33.221868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.412 17:28:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:54.412 17:28:33 -- common/autotest_common.sh@852 -- # return 0 00:17:54.412 17:28:33 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:54.980 Nvme0n1 00:17:54.980 17:28:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:55.240 [ 00:17:55.240 { 00:17:55.240 "name": "Nvme0n1", 00:17:55.240 "aliases": [ 00:17:55.240 "a4502fc9-4270-4f05-a050-c00b6560a295" 00:17:55.240 ], 00:17:55.240 "product_name": "NVMe disk", 00:17:55.240 "block_size": 4096, 00:17:55.240 "num_blocks": 38912, 00:17:55.240 "uuid": "a4502fc9-4270-4f05-a050-c00b6560a295", 00:17:55.240 "assigned_rate_limits": { 00:17:55.240 "rw_ios_per_sec": 0, 00:17:55.240 "rw_mbytes_per_sec": 0, 00:17:55.240 "r_mbytes_per_sec": 0, 00:17:55.240 "w_mbytes_per_sec": 0 00:17:55.240 }, 00:17:55.240 "claimed": false, 00:17:55.240 "zoned": false, 
00:17:55.240 "supported_io_types": { 00:17:55.240 "read": true, 00:17:55.240 "write": true, 00:17:55.240 "unmap": true, 00:17:55.240 "write_zeroes": true, 00:17:55.240 "flush": true, 00:17:55.240 "reset": true, 00:17:55.240 "compare": true, 00:17:55.240 "compare_and_write": true, 00:17:55.240 "abort": true, 00:17:55.240 "nvme_admin": true, 00:17:55.240 "nvme_io": true 00:17:55.240 }, 00:17:55.240 "driver_specific": { 00:17:55.240 "nvme": [ 00:17:55.240 { 00:17:55.240 "trid": { 00:17:55.240 "trtype": "TCP", 00:17:55.240 "adrfam": "IPv4", 00:17:55.240 "traddr": "10.0.0.2", 00:17:55.240 "trsvcid": "4420", 00:17:55.240 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:55.240 }, 00:17:55.240 "ctrlr_data": { 00:17:55.240 "cntlid": 1, 00:17:55.240 "vendor_id": "0x8086", 00:17:55.240 "model_number": "SPDK bdev Controller", 00:17:55.240 "serial_number": "SPDK0", 00:17:55.240 "firmware_revision": "24.01.1", 00:17:55.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:55.240 "oacs": { 00:17:55.240 "security": 0, 00:17:55.240 "format": 0, 00:17:55.240 "firmware": 0, 00:17:55.240 "ns_manage": 0 00:17:55.240 }, 00:17:55.240 "multi_ctrlr": true, 00:17:55.240 "ana_reporting": false 00:17:55.240 }, 00:17:55.240 "vs": { 00:17:55.240 "nvme_version": "1.3" 00:17:55.240 }, 00:17:55.240 "ns_data": { 00:17:55.240 "id": 1, 00:17:55.240 "can_share": true 00:17:55.240 } 00:17:55.240 } 00:17:55.240 ], 00:17:55.240 "mp_policy": "active_passive" 00:17:55.240 } 00:17:55.240 } 00:17:55.240 ] 00:17:55.240 17:28:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4100494 00:17:55.240 17:28:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:55.240 17:28:34 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.240 Running I/O for 10 seconds... 
00:17:56.619 Latency(us) 00:17:56.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.619 Nvme0n1 : 1.00 15631.00 61.06 0.00 0.00 0.00 0.00 0.00 00:17:56.619 =================================================================================================================== 00:17:56.619 Total : 15631.00 61.06 0.00 0.00 0.00 0.00 0.00 00:17:56.619 00:17:57.187 17:28:36 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:17:57.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.448 Nvme0n1 : 2.00 15736.50 61.47 0.00 0.00 0.00 0.00 0.00 00:17:57.448 =================================================================================================================== 00:17:57.448 Total : 15736.50 61.47 0.00 0.00 0.00 0.00 0.00 00:17:57.448 00:17:57.448 true 00:17:57.448 17:28:36 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:17:57.448 17:28:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:57.706 17:28:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:57.706 17:28:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:57.706 17:28:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 4100494 00:17:58.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.274 Nvme0n1 : 3.00 15782.67 61.65 0.00 0.00 0.00 0.00 0.00 00:17:58.274 =================================================================================================================== 00:17:58.274 Total : 15782.67 61.65 0.00 0.00 0.00 0.00 0.00 00:17:58.274 00:17:59.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.212 
Nvme0n1 : 4.00 15809.25 61.75 0.00 0.00 0.00 0.00 0.00 00:17:59.212 =================================================================================================================== 00:17:59.212 Total : 15809.25 61.75 0.00 0.00 0.00 0.00 0.00 00:17:59.212 00:18:00.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.588 Nvme0n1 : 5.00 15842.00 61.88 0.00 0.00 0.00 0.00 0.00 00:18:00.588 =================================================================================================================== 00:18:00.588 Total : 15842.00 61.88 0.00 0.00 0.00 0.00 0.00 00:18:00.588 00:18:01.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.526 Nvme0n1 : 6.00 15859.83 61.95 0.00 0.00 0.00 0.00 0.00 00:18:01.526 =================================================================================================================== 00:18:01.526 Total : 15859.83 61.95 0.00 0.00 0.00 0.00 0.00 00:18:01.526 00:18:02.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.462 Nvme0n1 : 7.00 15872.57 62.00 0.00 0.00 0.00 0.00 0.00 00:18:02.462 =================================================================================================================== 00:18:02.462 Total : 15872.57 62.00 0.00 0.00 0.00 0.00 0.00 00:18:02.462 00:18:03.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.395 Nvme0n1 : 8.00 15880.75 62.03 0.00 0.00 0.00 0.00 0.00 00:18:03.395 =================================================================================================================== 00:18:03.395 Total : 15880.75 62.03 0.00 0.00 0.00 0.00 0.00 00:18:03.395 00:18:04.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.329 Nvme0n1 : 9.00 15888.11 62.06 0.00 0.00 0.00 0.00 0.00 00:18:04.329 =================================================================================================================== 
00:18:04.329 Total : 15888.11 62.06 0.00 0.00 0.00 0.00 0.00 00:18:04.329 00:18:05.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.265 Nvme0n1 : 10.00 15900.70 62.11 0.00 0.00 0.00 0.00 0.00 00:18:05.265 =================================================================================================================== 00:18:05.265 Total : 15900.70 62.11 0.00 0.00 0.00 0.00 0.00 00:18:05.265 00:18:05.265 00:18:05.265 Latency(us) 00:18:05.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.265 Nvme0n1 : 10.01 15903.59 62.12 0.00 0.00 8043.17 4319.42 16324.42 00:18:05.265 =================================================================================================================== 00:18:05.265 Total : 15903.59 62.12 0.00 0.00 8043.17 4319.42 16324.42 00:18:05.265 0 00:18:05.265 17:28:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4100232 00:18:05.265 17:28:44 -- common/autotest_common.sh@926 -- # '[' -z 4100232 ']' 00:18:05.265 17:28:44 -- common/autotest_common.sh@930 -- # kill -0 4100232 00:18:05.265 17:28:44 -- common/autotest_common.sh@931 -- # uname 00:18:05.265 17:28:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:05.265 17:28:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4100232 00:18:05.523 17:28:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:05.523 17:28:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:05.523 17:28:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4100232' 00:18:05.523 killing process with pid 4100232 00:18:05.523 17:28:44 -- common/autotest_common.sh@945 -- # kill 4100232 00:18:05.523 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.523 00:18:05.523 Latency(us) 00:18:05.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:05.523 =================================================================================================================== 00:18:05.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.523 17:28:44 -- common/autotest_common.sh@950 -- # wait 4100232 00:18:05.523 17:28:44 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:05.782 17:28:44 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:05.782 17:28:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:06.040 17:28:44 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:06.040 17:28:44 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:06.040 17:28:44 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 4096403 00:18:06.040 17:28:44 -- target/nvmf_lvs_grow.sh@74 -- # wait 4096403 00:18:06.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 4096403 Killed "${NVMF_APP[@]}" "$@" 00:18:06.040 17:28:44 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:06.040 17:28:44 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:06.040 17:28:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:06.040 17:28:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:06.040 17:28:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.040 17:28:44 -- nvmf/common.sh@469 -- # nvmfpid=4102382 00:18:06.040 17:28:44 -- nvmf/common.sh@470 -- # waitforlisten 4102382 00:18:06.040 17:28:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:06.040 17:28:44 -- common/autotest_common.sh@819 -- # '[' -z 4102382 ']' 00:18:06.040 17:28:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.040 
17:28:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:06.040 17:28:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.040 17:28:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:06.040 17:28:44 -- common/autotest_common.sh@10 -- # set +x 00:18:06.040 [2024-07-12 17:28:44.884123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:06.040 [2024-07-12 17:28:44.884183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.040 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.040 [2024-07-12 17:28:44.972536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.298 [2024-07-12 17:28:45.013369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:06.298 [2024-07-12 17:28:45.013506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.298 [2024-07-12 17:28:45.013517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.298 [2024-07-12 17:28:45.013531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.299 [2024-07-12 17:28:45.013558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.865 17:28:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:06.865 17:28:45 -- common/autotest_common.sh@852 -- # return 0 00:18:06.865 17:28:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:06.865 17:28:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:06.865 17:28:45 -- common/autotest_common.sh@10 -- # set +x 00:18:06.865 17:28:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.865 17:28:45 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:07.124 [2024-07-12 17:28:46.051675] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:07.124 [2024-07-12 17:28:46.051781] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:07.124 [2024-07-12 17:28:46.051819] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:07.124 17:28:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:07.124 17:28:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev a4502fc9-4270-4f05-a050-c00b6560a295 00:18:07.124 17:28:46 -- common/autotest_common.sh@887 -- # local bdev_name=a4502fc9-4270-4f05-a050-c00b6560a295 00:18:07.124 17:28:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:07.124 17:28:46 -- common/autotest_common.sh@889 -- # local i 00:18:07.124 17:28:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:07.124 17:28:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:07.124 17:28:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:07.382 17:28:46 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4502fc9-4270-4f05-a050-c00b6560a295 -t 2000 00:18:07.641 [ 00:18:07.641 { 00:18:07.641 "name": "a4502fc9-4270-4f05-a050-c00b6560a295", 00:18:07.641 "aliases": [ 00:18:07.641 "lvs/lvol" 00:18:07.641 ], 00:18:07.641 "product_name": "Logical Volume", 00:18:07.641 "block_size": 4096, 00:18:07.641 "num_blocks": 38912, 00:18:07.641 "uuid": "a4502fc9-4270-4f05-a050-c00b6560a295", 00:18:07.641 "assigned_rate_limits": { 00:18:07.641 "rw_ios_per_sec": 0, 00:18:07.641 "rw_mbytes_per_sec": 0, 00:18:07.641 "r_mbytes_per_sec": 0, 00:18:07.641 "w_mbytes_per_sec": 0 00:18:07.641 }, 00:18:07.641 "claimed": false, 00:18:07.641 "zoned": false, 00:18:07.641 "supported_io_types": { 00:18:07.641 "read": true, 00:18:07.641 "write": true, 00:18:07.641 "unmap": true, 00:18:07.641 "write_zeroes": true, 00:18:07.641 "flush": false, 00:18:07.641 "reset": true, 00:18:07.641 "compare": false, 00:18:07.641 "compare_and_write": false, 00:18:07.641 "abort": false, 00:18:07.641 "nvme_admin": false, 00:18:07.641 "nvme_io": false 00:18:07.641 }, 00:18:07.641 "driver_specific": { 00:18:07.641 "lvol": { 00:18:07.641 "lvol_store_uuid": "08d18182-9c0f-4dbb-8af0-0050e6d4726f", 00:18:07.641 "base_bdev": "aio_bdev", 00:18:07.641 "thin_provision": false, 00:18:07.641 "snapshot": false, 00:18:07.641 "clone": false, 00:18:07.641 "esnap_clone": false 00:18:07.641 } 00:18:07.641 } 00:18:07.641 } 00:18:07.641 ] 00:18:07.641 17:28:46 -- common/autotest_common.sh@895 -- # return 0 00:18:07.641 17:28:46 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:07.641 17:28:46 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:07.900 17:28:46 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:07.900 17:28:46 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:07.900 17:28:46 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:08.158 17:28:46 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:08.158 17:28:46 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:08.416 [2024-07-12 17:28:47.204396] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:08.416 17:28:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:08.416 17:28:47 -- common/autotest_common.sh@640 -- # local es=0 00:18:08.416 17:28:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:08.416 17:28:47 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.416 17:28:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.416 17:28:47 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.416 17:28:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.416 17:28:47 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.416 17:28:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.416 17:28:47 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.416 17:28:47 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:08.416 
17:28:47 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:08.675 request: 00:18:08.675 { 00:18:08.675 "uuid": "08d18182-9c0f-4dbb-8af0-0050e6d4726f", 00:18:08.675 "method": "bdev_lvol_get_lvstores", 00:18:08.675 "req_id": 1 00:18:08.675 } 00:18:08.675 Got JSON-RPC error response 00:18:08.675 response: 00:18:08.675 { 00:18:08.675 "code": -19, 00:18:08.675 "message": "No such device" 00:18:08.675 } 00:18:08.675 17:28:47 -- common/autotest_common.sh@643 -- # es=1 00:18:08.675 17:28:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:08.675 17:28:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:08.675 17:28:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:08.675 17:28:47 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:08.934 aio_bdev 00:18:08.934 17:28:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a4502fc9-4270-4f05-a050-c00b6560a295 00:18:08.934 17:28:47 -- common/autotest_common.sh@887 -- # local bdev_name=a4502fc9-4270-4f05-a050-c00b6560a295 00:18:08.934 17:28:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:08.934 17:28:47 -- common/autotest_common.sh@889 -- # local i 00:18:08.934 17:28:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:08.934 17:28:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:08.934 17:28:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:09.192 17:28:47 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4502fc9-4270-4f05-a050-c00b6560a295 -t 2000 00:18:09.450 [ 00:18:09.450 { 00:18:09.450 "name": 
"a4502fc9-4270-4f05-a050-c00b6560a295", 00:18:09.450 "aliases": [ 00:18:09.450 "lvs/lvol" 00:18:09.450 ], 00:18:09.450 "product_name": "Logical Volume", 00:18:09.450 "block_size": 4096, 00:18:09.450 "num_blocks": 38912, 00:18:09.450 "uuid": "a4502fc9-4270-4f05-a050-c00b6560a295", 00:18:09.450 "assigned_rate_limits": { 00:18:09.450 "rw_ios_per_sec": 0, 00:18:09.450 "rw_mbytes_per_sec": 0, 00:18:09.450 "r_mbytes_per_sec": 0, 00:18:09.450 "w_mbytes_per_sec": 0 00:18:09.450 }, 00:18:09.450 "claimed": false, 00:18:09.450 "zoned": false, 00:18:09.450 "supported_io_types": { 00:18:09.450 "read": true, 00:18:09.450 "write": true, 00:18:09.450 "unmap": true, 00:18:09.450 "write_zeroes": true, 00:18:09.450 "flush": false, 00:18:09.450 "reset": true, 00:18:09.450 "compare": false, 00:18:09.450 "compare_and_write": false, 00:18:09.450 "abort": false, 00:18:09.450 "nvme_admin": false, 00:18:09.450 "nvme_io": false 00:18:09.450 }, 00:18:09.450 "driver_specific": { 00:18:09.450 "lvol": { 00:18:09.450 "lvol_store_uuid": "08d18182-9c0f-4dbb-8af0-0050e6d4726f", 00:18:09.450 "base_bdev": "aio_bdev", 00:18:09.450 "thin_provision": false, 00:18:09.450 "snapshot": false, 00:18:09.450 "clone": false, 00:18:09.450 "esnap_clone": false 00:18:09.450 } 00:18:09.450 } 00:18:09.450 } 00:18:09.450 ] 00:18:09.450 17:28:48 -- common/autotest_common.sh@895 -- # return 0 00:18:09.450 17:28:48 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:09.450 17:28:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:09.450 17:28:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:09.450 17:28:48 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:09.451 17:28:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:09.709 
17:28:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:09.709 17:28:48 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4502fc9-4270-4f05-a050-c00b6560a295 00:18:09.968 17:28:48 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08d18182-9c0f-4dbb-8af0-0050e6d4726f 00:18:10.226 17:28:49 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:10.484 17:28:49 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:10.484 00:18:10.484 real 0m18.715s 00:18:10.484 user 0m48.235s 00:18:10.484 sys 0m3.603s 00:18:10.484 17:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:10.484 17:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:10.484 ************************************ 00:18:10.484 END TEST lvs_grow_dirty 00:18:10.484 ************************************ 00:18:10.484 17:28:49 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:10.484 17:28:49 -- common/autotest_common.sh@796 -- # type=--id 00:18:10.484 17:28:49 -- common/autotest_common.sh@797 -- # id=0 00:18:10.484 17:28:49 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:10.484 17:28:49 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:10.484 17:28:49 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:10.484 17:28:49 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:10.484 17:28:49 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:10.484 17:28:49 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:10.484 nvmf_trace.0 00:18:10.742 17:28:49 -- common/autotest_common.sh@811 -- # 
return 0 00:18:10.742 17:28:49 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:10.742 17:28:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:10.742 17:28:49 -- nvmf/common.sh@116 -- # sync 00:18:10.742 17:28:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:10.742 17:28:49 -- nvmf/common.sh@119 -- # set +e 00:18:10.742 17:28:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:10.742 17:28:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:10.742 rmmod nvme_tcp 00:18:10.742 rmmod nvme_fabrics 00:18:10.742 rmmod nvme_keyring 00:18:10.742 17:28:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:10.742 17:28:49 -- nvmf/common.sh@123 -- # set -e 00:18:10.742 17:28:49 -- nvmf/common.sh@124 -- # return 0 00:18:10.742 17:28:49 -- nvmf/common.sh@477 -- # '[' -n 4102382 ']' 00:18:10.742 17:28:49 -- nvmf/common.sh@478 -- # killprocess 4102382 00:18:10.742 17:28:49 -- common/autotest_common.sh@926 -- # '[' -z 4102382 ']' 00:18:10.742 17:28:49 -- common/autotest_common.sh@930 -- # kill -0 4102382 00:18:10.742 17:28:49 -- common/autotest_common.sh@931 -- # uname 00:18:10.742 17:28:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:10.742 17:28:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4102382 00:18:10.742 17:28:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:10.742 17:28:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:10.742 17:28:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4102382' 00:18:10.742 killing process with pid 4102382 00:18:10.742 17:28:49 -- common/autotest_common.sh@945 -- # kill 4102382 00:18:10.742 17:28:49 -- common/autotest_common.sh@950 -- # wait 4102382 00:18:11.019 17:28:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.019 17:28:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.019 17:28:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.019 17:28:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.019 17:28:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.019 17:28:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.019 17:28:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.019 17:28:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.922 17:28:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:12.922 00:18:12.922 real 0m45.351s 00:18:12.922 user 1m12.397s 00:18:12.922 sys 0m9.682s 00:18:12.922 17:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.922 17:28:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.922 ************************************ 00:18:12.922 END TEST nvmf_lvs_grow 00:18:12.922 ************************************ 00:18:12.922 17:28:51 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:12.922 17:28:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:12.922 17:28:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:12.922 17:28:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.922 ************************************ 00:18:12.922 START TEST nvmf_bdev_io_wait 00:18:12.922 ************************************ 00:18:12.922 17:28:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:13.181 * Looking for test storage... 
00:18:13.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.181 17:28:51 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.181 17:28:51 -- nvmf/common.sh@7 -- # uname -s 00:18:13.181 17:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.181 17:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.181 17:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.181 17:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.181 17:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.181 17:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.181 17:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.181 17:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.181 17:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.181 17:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.181 17:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:13.181 17:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:13.181 17:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.181 17:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.181 17:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.181 17:28:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.181 17:28:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.181 17:28:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.181 17:28:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.181 17:28:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.181 17:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.181 17:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.181 17:28:51 -- paths/export.sh@5 -- # export PATH 00:18:13.181 17:28:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.181 17:28:51 -- nvmf/common.sh@46 -- # : 0 00:18:13.181 17:28:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:13.181 17:28:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:13.181 17:28:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:13.181 17:28:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.181 17:28:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.181 17:28:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:13.181 17:28:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:13.181 17:28:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:13.181 17:28:51 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.181 17:28:51 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.181 17:28:51 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:13.181 17:28:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:13.181 17:28:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.181 17:28:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:13.181 17:28:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:13.181 17:28:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:13.181 17:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.181 17:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.181 17:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.181 
17:28:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:13.181 17:28:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:13.181 17:28:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:13.181 17:28:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.507 17:28:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:18.507 17:28:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:18.507 17:28:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:18.507 17:28:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:18.507 17:28:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:18.507 17:28:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:18.507 17:28:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:18.507 17:28:57 -- nvmf/common.sh@294 -- # net_devs=() 00:18:18.507 17:28:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:18.507 17:28:57 -- nvmf/common.sh@295 -- # e810=() 00:18:18.507 17:28:57 -- nvmf/common.sh@295 -- # local -ga e810 00:18:18.507 17:28:57 -- nvmf/common.sh@296 -- # x722=() 00:18:18.507 17:28:57 -- nvmf/common.sh@296 -- # local -ga x722 00:18:18.507 17:28:57 -- nvmf/common.sh@297 -- # mlx=() 00:18:18.507 17:28:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:18.507 17:28:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.507 17:28:57 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.507 17:28:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:18.507 17:28:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:18.507 17:28:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:18.507 17:28:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:18.507 17:28:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:18.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:18.507 17:28:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:18.507 17:28:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:18.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:18.507 17:28:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:18.507 17:28:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:18.507 17:28:57 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:18.507 17:28:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.507 17:28:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:18.507 17:28:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.507 17:28:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:18.507 Found net devices under 0000:af:00.0: cvl_0_0 00:18:18.507 17:28:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.507 17:28:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:18.507 17:28:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.507 17:28:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:18.507 17:28:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.507 17:28:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:18.507 Found net devices under 0000:af:00.1: cvl_0_1 00:18:18.507 17:28:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.507 17:28:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:18.507 17:28:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:18.507 17:28:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:18.507 17:28:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:18.507 17:28:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.507 17:28:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.507 17:28:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.507 17:28:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:18.507 17:28:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.507 17:28:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.507 17:28:57 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:18.507 17:28:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.507 17:28:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.507 17:28:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:18.507 17:28:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:18.507 17:28:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.507 17:28:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.507 17:28:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.507 17:28:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.507 17:28:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:18.507 17:28:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.766 17:28:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.766 17:28:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.766 17:28:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:18.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:18:18.766 00:18:18.766 --- 10.0.0.2 ping statistics --- 00:18:18.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.766 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:18:18.766 17:28:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:18:18.766 00:18:18.766 --- 10.0.0.1 ping statistics --- 00:18:18.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.766 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:18:18.766 17:28:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.766 17:28:57 -- nvmf/common.sh@410 -- # return 0 00:18:18.766 17:28:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:18.766 17:28:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.766 17:28:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:18.766 17:28:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:18.766 17:28:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.766 17:28:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:18.766 17:28:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:18.766 17:28:57 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:18.766 17:28:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:18.766 17:28:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:18.766 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:18:18.766 17:28:57 -- nvmf/common.sh@469 -- # nvmfpid=4106975 00:18:18.766 17:28:57 -- nvmf/common.sh@470 -- # waitforlisten 4106975 00:18:18.766 17:28:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:18.766 17:28:57 -- common/autotest_common.sh@819 -- # '[' -z 4106975 ']' 00:18:18.767 17:28:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.767 17:28:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:18.767 17:28:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:18.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.767 17:28:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:18.767 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:18:18.767 [2024-07-12 17:28:57.648704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:18.767 [2024-07-12 17:28:57.648758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.767 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.025 [2024-07-12 17:28:57.741330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.025 [2024-07-12 17:28:57.785745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.025 [2024-07-12 17:28:57.785891] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.025 [2024-07-12 17:28:57.785903] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.025 [2024-07-12 17:28:57.785913] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.025 [2024-07-12 17:28:57.785970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.025 [2024-07-12 17:28:57.785993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.025 [2024-07-12 17:28:57.786087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.025 [2024-07-12 17:28:57.786089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.592 17:28:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:19.592 17:28:58 -- common/autotest_common.sh@852 -- # return 0 00:18:19.592 17:28:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:19.592 17:28:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:19.592 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.592 17:28:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.592 17:28:58 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:19.592 17:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.592 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.592 17:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.592 17:28:58 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:19.592 17:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.592 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.592 17:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.592 17:28:58 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.592 17:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.592 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.592 [2024-07-12 17:28:58.531557] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.592 17:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.592 17:28:58 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:19.592 17:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.592 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.852 Malloc0 00:18:19.852 17:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:19.852 17:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.852 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.852 17:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.852 17:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.852 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.852 17:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.852 17:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.852 17:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.852 [2024-07-12 17:28:58.603154] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.852 17:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4107121 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@30 -- # READ_PID=4107124 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # config=() 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # local 
subsystem config 00:18:19.852 17:28:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:19.852 { 00:18:19.852 "params": { 00:18:19.852 "name": "Nvme$subsystem", 00:18:19.852 "trtype": "$TEST_TRANSPORT", 00:18:19.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.852 "adrfam": "ipv4", 00:18:19.852 "trsvcid": "$NVMF_PORT", 00:18:19.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.852 "hdgst": ${hdgst:-false}, 00:18:19.852 "ddgst": ${ddgst:-false} 00:18:19.852 }, 00:18:19.852 "method": "bdev_nvme_attach_controller" 00:18:19.852 } 00:18:19.852 EOF 00:18:19.852 )") 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4107126 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # config=() 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # local subsystem config 00:18:19.852 17:28:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:19.852 { 00:18:19.852 "params": { 00:18:19.852 "name": "Nvme$subsystem", 00:18:19.852 "trtype": "$TEST_TRANSPORT", 00:18:19.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.852 "adrfam": "ipv4", 00:18:19.852 "trsvcid": "$NVMF_PORT", 00:18:19.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.852 
"hdgst": ${hdgst:-false}, 00:18:19.852 "ddgst": ${ddgst:-false} 00:18:19.852 }, 00:18:19.852 "method": "bdev_nvme_attach_controller" 00:18:19.852 } 00:18:19.852 EOF 00:18:19.852 )") 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4107130 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@35 -- # sync 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # config=() 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # cat 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # local subsystem config 00:18:19.852 17:28:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:19.852 { 00:18:19.852 "params": { 00:18:19.852 "name": "Nvme$subsystem", 00:18:19.852 "trtype": "$TEST_TRANSPORT", 00:18:19.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.852 "adrfam": "ipv4", 00:18:19.852 "trsvcid": "$NVMF_PORT", 00:18:19.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.852 "hdgst": ${hdgst:-false}, 00:18:19.852 "ddgst": ${ddgst:-false} 00:18:19.852 }, 00:18:19.852 "method": "bdev_nvme_attach_controller" 00:18:19.852 } 00:18:19.852 EOF 00:18:19.852 )") 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # config=() 00:18:19.852 17:28:58 -- nvmf/common.sh@520 -- # local subsystem config 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # cat 00:18:19.852 17:28:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:19.852 { 00:18:19.852 "params": { 00:18:19.852 "name": "Nvme$subsystem", 00:18:19.852 "trtype": "$TEST_TRANSPORT", 00:18:19.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.852 
"adrfam": "ipv4", 00:18:19.852 "trsvcid": "$NVMF_PORT", 00:18:19.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.852 "hdgst": ${hdgst:-false}, 00:18:19.852 "ddgst": ${ddgst:-false} 00:18:19.852 }, 00:18:19.852 "method": "bdev_nvme_attach_controller" 00:18:19.852 } 00:18:19.852 EOF 00:18:19.852 )") 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # cat 00:18:19.852 17:28:58 -- target/bdev_io_wait.sh@37 -- # wait 4107121 00:18:19.852 17:28:58 -- nvmf/common.sh@542 -- # cat 00:18:19.852 17:28:58 -- nvmf/common.sh@544 -- # jq . 00:18:19.852 17:28:58 -- nvmf/common.sh@544 -- # jq . 00:18:19.852 17:28:58 -- nvmf/common.sh@544 -- # jq . 00:18:19.852 17:28:58 -- nvmf/common.sh@545 -- # IFS=, 00:18:19.852 17:28:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:19.852 "params": { 00:18:19.852 "name": "Nvme1", 00:18:19.852 "trtype": "tcp", 00:18:19.852 "traddr": "10.0.0.2", 00:18:19.852 "adrfam": "ipv4", 00:18:19.853 "trsvcid": "4420", 00:18:19.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.853 "hdgst": false, 00:18:19.853 "ddgst": false 00:18:19.853 }, 00:18:19.853 "method": "bdev_nvme_attach_controller" 00:18:19.853 }' 00:18:19.853 17:28:58 -- nvmf/common.sh@544 -- # jq . 
00:18:19.853 17:28:58 -- nvmf/common.sh@545 -- # IFS=, 00:18:19.853 17:28:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:19.853 "params": { 00:18:19.853 "name": "Nvme1", 00:18:19.853 "trtype": "tcp", 00:18:19.853 "traddr": "10.0.0.2", 00:18:19.853 "adrfam": "ipv4", 00:18:19.853 "trsvcid": "4420", 00:18:19.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.853 "hdgst": false, 00:18:19.853 "ddgst": false 00:18:19.853 }, 00:18:19.853 "method": "bdev_nvme_attach_controller" 00:18:19.853 }' 00:18:19.853 17:28:58 -- nvmf/common.sh@545 -- # IFS=, 00:18:19.853 17:28:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:19.853 "params": { 00:18:19.853 "name": "Nvme1", 00:18:19.853 "trtype": "tcp", 00:18:19.853 "traddr": "10.0.0.2", 00:18:19.853 "adrfam": "ipv4", 00:18:19.853 "trsvcid": "4420", 00:18:19.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.853 "hdgst": false, 00:18:19.853 "ddgst": false 00:18:19.853 }, 00:18:19.853 "method": "bdev_nvme_attach_controller" 00:18:19.853 }' 00:18:19.853 17:28:58 -- nvmf/common.sh@545 -- # IFS=, 00:18:19.853 17:28:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:19.853 "params": { 00:18:19.853 "name": "Nvme1", 00:18:19.853 "trtype": "tcp", 00:18:19.853 "traddr": "10.0.0.2", 00:18:19.853 "adrfam": "ipv4", 00:18:19.853 "trsvcid": "4420", 00:18:19.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.853 "hdgst": false, 00:18:19.853 "ddgst": false 00:18:19.853 }, 00:18:19.853 "method": "bdev_nvme_attach_controller" 00:18:19.853 }' 00:18:19.853 [2024-07-12 17:28:58.653453] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:19.853 [2024-07-12 17:28:58.653514] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:19.853 [2024-07-12 17:28:58.653885] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:19.853 [2024-07-12 17:28:58.653924] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:19.853 [2024-07-12 17:28:58.656651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:19.853 [2024-07-12 17:28:58.656708] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:19.853 [2024-07-12 17:28:58.657878] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:19.853 [2024-07-12 17:28:58.657931] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:19.853 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.853 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.112 [2024-07-12 17:28:58.827783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.112 [2024-07-12 17:28:58.854744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:20.112 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.112 [2024-07-12 17:28:58.948264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.112 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.112 [2024-07-12 17:28:58.987809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.112 [2024-07-12 17:28:59.012380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.112 [2024-07-12 17:28:59.046932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:20.112 [2024-07-12 17:28:59.073716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.372 [2024-07-12 17:28:59.100553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:20.372 Running I/O for 1 seconds... 00:18:20.372 Running I/O for 1 seconds... 00:18:20.372 Running I/O for 1 seconds... 00:18:20.630 Running I/O for 1 seconds... 
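For readers following the trace: the four concurrent bdevperf invocations launched just above differ only in core mask, instance ID, and workload. The following dry-run sketch restates that pattern; the binary path, queue depth, and flags are taken from the log, but the loop itself is illustrative and nothing is executed here (the real script also pipes in a JSON config via `gen_nvmf_target_json`, elided below).

```shell
# Dry-run restatement of the four bdevperf runs traced in the log.
# Each workload gets its own core mask and instance ID; the commands
# are only printed so the pattern is visible.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
workloads=(write read flush unmap)
masks=(0x10 0x20 0x40 0x80)
for i in "${!workloads[@]}"; do
    echo "$BDEVPERF -m ${masks[$i]} -i $((i + 1)) -q 128 -o 4096 -w ${workloads[$i]} -t 1 -s 256"
done
```

In the log the four processes run in parallel and the harness then `wait`s on each PID (4107121, 4107124, 4107126, 4107130) before tearing the subsystem down.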
00:18:21.567 00:18:21.567 Latency(us) 00:18:21.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.567 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:21.567 Nvme1n1 : 1.01 10573.59 41.30 0.00 0.00 12058.43 7328.12 21567.30 00:18:21.567 =================================================================================================================== 00:18:21.567 Total : 10573.59 41.30 0.00 0.00 12058.43 7328.12 21567.30 00:18:21.567 00:18:21.567 Latency(us) 00:18:21.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.567 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:21.567 Nvme1n1 : 1.02 5064.08 19.78 0.00 0.00 25001.86 9294.20 38368.35 00:18:21.567 =================================================================================================================== 00:18:21.567 Total : 5064.08 19.78 0.00 0.00 25001.86 9294.20 38368.35 00:18:21.567 00:18:21.567 Latency(us) 00:18:21.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.567 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:21.567 Nvme1n1 : 1.00 167376.56 653.81 0.00 0.00 761.90 305.34 1146.88 00:18:21.567 =================================================================================================================== 00:18:21.567 Total : 167376.56 653.81 0.00 0.00 761.90 305.34 1146.88 00:18:21.567 00:18:21.567 Latency(us) 00:18:21.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.567 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:21.567 Nvme1n1 : 1.01 5534.39 21.62 0.00 0.00 23051.37 5868.45 54096.99 00:18:21.567 =================================================================================================================== 00:18:21.567 Total : 5534.39 21.62 0.00 0.00 23051.37 5868.45 54096.99 00:18:21.825 17:29:00 -- target/bdev_io_wait.sh@38 
-- # wait 4107124 00:18:21.825 17:29:00 -- target/bdev_io_wait.sh@39 -- # wait 4107126 00:18:21.825 17:29:00 -- target/bdev_io_wait.sh@40 -- # wait 4107130 00:18:21.825 17:29:00 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.825 17:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.825 17:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 17:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.825 17:29:00 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:21.825 17:29:00 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:21.825 17:29:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:21.825 17:29:00 -- nvmf/common.sh@116 -- # sync 00:18:21.825 17:29:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:21.825 17:29:00 -- nvmf/common.sh@119 -- # set +e 00:18:21.825 17:29:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:21.825 17:29:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:21.825 rmmod nvme_tcp 00:18:21.825 rmmod nvme_fabrics 00:18:21.825 rmmod nvme_keyring 00:18:21.826 17:29:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:21.826 17:29:00 -- nvmf/common.sh@123 -- # set -e 00:18:21.826 17:29:00 -- nvmf/common.sh@124 -- # return 0 00:18:21.826 17:29:00 -- nvmf/common.sh@477 -- # '[' -n 4106975 ']' 00:18:21.826 17:29:00 -- nvmf/common.sh@478 -- # killprocess 4106975 00:18:21.826 17:29:00 -- common/autotest_common.sh@926 -- # '[' -z 4106975 ']' 00:18:21.826 17:29:00 -- common/autotest_common.sh@930 -- # kill -0 4106975 00:18:21.826 17:29:00 -- common/autotest_common.sh@931 -- # uname 00:18:21.826 17:29:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:21.826 17:29:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4106975 00:18:21.826 17:29:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:21.826 17:29:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
00:18:21.826 17:29:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4106975' 00:18:21.826 killing process with pid 4106975 00:18:21.826 17:29:00 -- common/autotest_common.sh@945 -- # kill 4106975 00:18:21.826 17:29:00 -- common/autotest_common.sh@950 -- # wait 4106975 00:18:22.085 17:29:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.085 17:29:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.085 17:29:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.085 17:29:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.085 17:29:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.085 17:29:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.085 17:29:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.085 17:29:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.622 17:29:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:24.622 00:18:24.622 real 0m11.166s 00:18:24.622 user 0m19.616s 00:18:24.622 sys 0m5.894s 00:18:24.622 17:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.622 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:24.622 ************************************ 00:18:24.622 END TEST nvmf_bdev_io_wait 00:18:24.622 ************************************ 00:18:24.622 17:29:03 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:24.622 17:29:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:24.622 17:29:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:24.622 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:24.622 ************************************ 00:18:24.622 START TEST nvmf_queue_depth 00:18:24.622 ************************************ 00:18:24.622 17:29:03 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:24.622 * Looking for test storage... 00:18:24.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.622 17:29:03 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.622 17:29:03 -- nvmf/common.sh@7 -- # uname -s 00:18:24.622 17:29:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.622 17:29:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.622 17:29:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.622 17:29:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.622 17:29:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.622 17:29:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.622 17:29:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.622 17:29:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.622 17:29:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.622 17:29:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.622 17:29:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:24.622 17:29:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:24.622 17:29:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.622 17:29:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.622 17:29:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.622 17:29:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.622 17:29:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.622 17:29:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.622 17:29:03 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.622 17:29:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.622 17:29:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.622 17:29:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.622 17:29:03 -- paths/export.sh@5 -- # export PATH 00:18:24.622 17:29:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.622 17:29:03 -- nvmf/common.sh@46 -- # : 0 00:18:24.622 17:29:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:24.622 17:29:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:24.622 17:29:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:24.622 17:29:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.622 17:29:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.622 17:29:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:24.622 17:29:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:24.622 17:29:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:24.622 17:29:03 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:24.622 17:29:03 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:24.622 17:29:03 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.622 17:29:03 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:24.622 17:29:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:24.622 17:29:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.622 17:29:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:24.622 17:29:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:24.622 17:29:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:24.622 17:29:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.622 17:29:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:24.622 17:29:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.622 17:29:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:24.622 17:29:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:24.622 17:29:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:24.622 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:29.898 17:29:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:29.898 17:29:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:29.898 17:29:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:29.898 17:29:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:29.898 17:29:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:29.898 17:29:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:29.898 17:29:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:29.898 17:29:08 -- nvmf/common.sh@294 -- # net_devs=() 00:18:29.898 17:29:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:29.898 17:29:08 -- nvmf/common.sh@295 -- # e810=() 00:18:29.898 17:29:08 -- nvmf/common.sh@295 -- # local -ga e810 00:18:29.898 17:29:08 -- nvmf/common.sh@296 -- # x722=() 00:18:29.898 17:29:08 -- nvmf/common.sh@296 -- # local -ga x722 00:18:29.898 17:29:08 -- nvmf/common.sh@297 -- # mlx=() 00:18:29.898 17:29:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:29.898 17:29:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.899 17:29:08 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.899 17:29:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:29.899 17:29:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:29.899 17:29:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:29.899 17:29:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:29.899 17:29:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:29.899 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:29.899 17:29:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:29.899 17:29:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:29.899 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:29.899 17:29:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:29.899 
17:29:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:29.899 17:29:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.899 17:29:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:29.899 17:29:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.899 17:29:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:29.899 Found net devices under 0000:af:00.0: cvl_0_0 00:18:29.899 17:29:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.899 17:29:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:29.899 17:29:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.899 17:29:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:29.899 17:29:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.899 17:29:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:29.899 Found net devices under 0000:af:00.1: cvl_0_1 00:18:29.899 17:29:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.899 17:29:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:29.899 17:29:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:29.899 17:29:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:29.899 17:29:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.899 17:29:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.899 17:29:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.899 17:29:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:29.899 17:29:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:29.899 17:29:08 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:29.899 17:29:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:29.899 17:29:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:29.899 17:29:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.899 17:29:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:29.899 17:29:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:29.899 17:29:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:29.899 17:29:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:29.899 17:29:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:29.899 17:29:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:29.899 17:29:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:29.899 17:29:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:29.899 17:29:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.899 17:29:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.899 17:29:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:29.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:18:29.899 00:18:29.899 --- 10.0.0.2 ping statistics --- 00:18:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.899 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:18:29.899 17:29:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:18:29.899 00:18:29.899 --- 10.0.0.1 ping statistics --- 00:18:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.899 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:18:29.899 17:29:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.899 17:29:08 -- nvmf/common.sh@410 -- # return 0 00:18:29.899 17:29:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:29.899 17:29:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.899 17:29:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:29.899 17:29:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.899 17:29:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:29.899 17:29:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:30.158 17:29:08 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:30.158 17:29:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:30.158 17:29:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:30.158 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:18:30.158 17:29:08 -- nvmf/common.sh@469 -- # nvmfpid=4111114 00:18:30.158 17:29:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.158 17:29:08 -- nvmf/common.sh@470 -- # waitforlisten 4111114 00:18:30.158 17:29:08 -- common/autotest_common.sh@819 -- # '[' -z 4111114 ']' 00:18:30.158 17:29:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.158 17:29:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:30.158 17:29:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
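The `nvmf_tcp_init` steps traced above move one port of the NIC into a network namespace so the SPDK target and the initiator can talk over real TCP on a single host, then verify reachability with pings in both directions. A minimal sketch of that sequence is below. The interface names (`cvl_0_0`, `cvl_0_1`), namespace name, addresses, and port 4420 are taken from the log; the `run()` wrapper and `DRY_RUN` guard are illustrative additions (the real commands need root and the physical NIC ports), not SPDK code.

```shell
#!/usr/bin/env bash
# Sketch of the NVMe/TCP test topology from the log: cvl_0_0 (target
# side) is moved into a namespace, cvl_0_1 stays in the root namespace
# as the initiator interface. DRY_RUN=1 prints each command instead of
# executing it.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

nvmf_tcp_netns_sketch() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"
    # 10.0.0.1 = initiator, 10.0.0.2 = target (matches the log's pings)
    run ip addr add 10.0.0.1/24 dev "$ini_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # Let the NVMe/TCP discovery/IO port through the host firewall
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before starting the target
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Once this topology exists, the target app is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, as the log does next) while bdevperf and other initiators run in the root namespace.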
00:18:30.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.158 17:29:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:30.158 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:18:30.158 [2024-07-12 17:29:08.941820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:30.158 [2024-07-12 17:29:08.941878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.158 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.158 [2024-07-12 17:29:09.019359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.158 [2024-07-12 17:29:09.062014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:30.158 [2024-07-12 17:29:09.062157] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.158 [2024-07-12 17:29:09.062168] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.158 [2024-07-12 17:29:09.062178] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.158 [2024-07-12 17:29:09.062197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.094 17:29:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:31.094 17:29:09 -- common/autotest_common.sh@852 -- # return 0 00:18:31.094 17:29:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:31.094 17:29:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:31.094 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 17:29:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.094 17:29:09 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.094 17:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.094 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 [2024-07-12 17:29:09.817796] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.094 17:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.094 17:29:09 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:31.094 17:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.094 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 Malloc0 00:18:31.094 17:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.094 17:29:09 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:31.094 17:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.094 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 17:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.094 17:29:09 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.094 17:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.094 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 17:29:09 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.094 17:29:09 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.094 17:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.094 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 [2024-07-12 17:29:09.878045] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.094 17:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.094 17:29:09 -- target/queue_depth.sh@30 -- # bdevperf_pid=4111325 00:18:31.094 17:29:09 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:31.094 17:29:09 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.094 17:29:09 -- target/queue_depth.sh@33 -- # waitforlisten 4111325 /var/tmp/bdevperf.sock 00:18:31.094 17:29:09 -- common/autotest_common.sh@819 -- # '[' -z 4111325 ']' 00:18:31.094 17:29:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.094 17:29:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:31.094 17:29:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.094 17:29:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:31.094 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 [2024-07-12 17:29:09.926700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:31.094 [2024-07-12 17:29:09.926753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111325 ] 00:18:31.094 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.094 [2024-07-12 17:29:10.007609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.094 [2024-07-12 17:29:10.051779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.031 17:29:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:32.031 17:29:10 -- common/autotest_common.sh@852 -- # return 0 00:18:32.031 17:29:10 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:32.031 17:29:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.031 17:29:10 -- common/autotest_common.sh@10 -- # set +x 00:18:32.031 NVMe0n1 00:18:32.031 17:29:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.031 17:29:10 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.289 Running I/O for 10 seconds... 
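The queue_depth test traced above builds its target over JSON-RPC (transport, malloc bdev, subsystem, namespace, listener) and then attaches bdevperf as the initiator before running a 1024-deep 4 KiB verify workload for 10 seconds. A condensed sketch of that RPC sequence follows; the NQN, addresses, and RPC names are taken from the log, while the `rpc()` helper and `DRY_RUN` guard are illustrative, not part of queue_depth.sh.

```shell
# Sketch of the RPC-driven setup from target/queue_depth.sh.
# DRY_RUN=1 prints the rpc.py invocations instead of executing them
# (a live run needs a running nvmf_tgt and bdevperf).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

rpc() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "rpc.py $*"
    else
        "$SPDK_DIR/scripts/rpc.py" "$@"
    fi
}

queue_depth_rpc_sketch() {
    local nqn=nqn.2016-06.io.spdk:cnode1
    # Target side: TCP transport with 8192-byte in-capsule data,
    # a 64 MiB / 512 B-block malloc bdev, one subsystem listening on 4420
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf was started separately with
    #   bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
    # and is told over its own RPC socket to attach the controller
    rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
}
```

After the attach, `bdevperf.py -s /var/tmp/bdevperf.sock perform_tests` kicks off the timed run whose per-device IOPS/latency table appears in the log output that follows.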
00:18:42.261 00:18:42.262 Latency(us) 00:18:42.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.262 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:42.262 Verification LBA range: start 0x0 length 0x4000 00:18:42.262 NVMe0n1 : 10.08 12125.61 47.37 0.00 0.00 84090.54 17158.52 61961.31 00:18:42.262 =================================================================================================================== 00:18:42.262 Total : 12125.61 47.37 0.00 0.00 84090.54 17158.52 61961.31 00:18:42.262 0 00:18:42.262 17:29:21 -- target/queue_depth.sh@39 -- # killprocess 4111325 00:18:42.262 17:29:21 -- common/autotest_common.sh@926 -- # '[' -z 4111325 ']' 00:18:42.262 17:29:21 -- common/autotest_common.sh@930 -- # kill -0 4111325 00:18:42.262 17:29:21 -- common/autotest_common.sh@931 -- # uname 00:18:42.262 17:29:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:42.262 17:29:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4111325 00:18:42.262 17:29:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:42.262 17:29:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:42.262 17:29:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4111325' 00:18:42.262 killing process with pid 4111325 00:18:42.262 17:29:21 -- common/autotest_common.sh@945 -- # kill 4111325 00:18:42.262 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.262 00:18:42.262 Latency(us) 00:18:42.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.262 =================================================================================================================== 00:18:42.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.262 17:29:21 -- common/autotest_common.sh@950 -- # wait 4111325 00:18:42.520 17:29:21 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:42.521 17:29:21 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:18:42.521 17:29:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:42.521 17:29:21 -- nvmf/common.sh@116 -- # sync 00:18:42.521 17:29:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:42.521 17:29:21 -- nvmf/common.sh@119 -- # set +e 00:18:42.521 17:29:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:42.521 17:29:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:42.521 rmmod nvme_tcp 00:18:42.521 rmmod nvme_fabrics 00:18:42.521 rmmod nvme_keyring 00:18:42.521 17:29:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:42.521 17:29:21 -- nvmf/common.sh@123 -- # set -e 00:18:42.521 17:29:21 -- nvmf/common.sh@124 -- # return 0 00:18:42.521 17:29:21 -- nvmf/common.sh@477 -- # '[' -n 4111114 ']' 00:18:42.521 17:29:21 -- nvmf/common.sh@478 -- # killprocess 4111114 00:18:42.521 17:29:21 -- common/autotest_common.sh@926 -- # '[' -z 4111114 ']' 00:18:42.521 17:29:21 -- common/autotest_common.sh@930 -- # kill -0 4111114 00:18:42.521 17:29:21 -- common/autotest_common.sh@931 -- # uname 00:18:42.521 17:29:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:42.521 17:29:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4111114 00:18:42.780 17:29:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:42.780 17:29:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:42.780 17:29:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4111114' 00:18:42.780 killing process with pid 4111114 00:18:42.780 17:29:21 -- common/autotest_common.sh@945 -- # kill 4111114 00:18:42.780 17:29:21 -- common/autotest_common.sh@950 -- # wait 4111114 00:18:42.780 17:29:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:42.780 17:29:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:42.780 17:29:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:42.780 17:29:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:18:42.780 17:29:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:42.780 17:29:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.780 17:29:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.780 17:29:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.314 17:29:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:45.314 00:18:45.314 real 0m20.706s 00:18:45.314 user 0m25.538s 00:18:45.314 sys 0m5.681s 00:18:45.314 17:29:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.314 17:29:23 -- common/autotest_common.sh@10 -- # set +x 00:18:45.314 ************************************ 00:18:45.314 END TEST nvmf_queue_depth 00:18:45.314 ************************************ 00:18:45.314 17:29:23 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:45.314 17:29:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:45.314 17:29:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:45.314 17:29:23 -- common/autotest_common.sh@10 -- # set +x 00:18:45.314 ************************************ 00:18:45.314 START TEST nvmf_multipath 00:18:45.314 ************************************ 00:18:45.314 17:29:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:45.314 * Looking for test storage... 
00:18:45.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.314 17:29:23 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.314 17:29:23 -- nvmf/common.sh@7 -- # uname -s 00:18:45.314 17:29:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.314 17:29:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.314 17:29:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.314 17:29:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.314 17:29:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.314 17:29:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.314 17:29:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.314 17:29:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.314 17:29:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.314 17:29:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.314 17:29:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:45.314 17:29:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:45.314 17:29:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.314 17:29:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.314 17:29:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.314 17:29:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.314 17:29:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.314 17:29:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.314 17:29:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.314 17:29:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.314 17:29:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.314 17:29:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.315 17:29:23 -- paths/export.sh@5 -- # export PATH 00:18:45.315 17:29:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.315 17:29:23 -- nvmf/common.sh@46 -- # : 0 00:18:45.315 17:29:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:45.315 17:29:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:45.315 17:29:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:45.315 17:29:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.315 17:29:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.315 17:29:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:45.315 17:29:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:45.315 17:29:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:45.315 17:29:23 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.315 17:29:23 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.315 17:29:23 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:45.315 17:29:23 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.315 17:29:23 -- target/multipath.sh@43 -- # nvmftestinit 00:18:45.315 17:29:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:45.315 17:29:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.315 17:29:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:45.315 17:29:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:45.315 17:29:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:45.315 17:29:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:45.315 17:29:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.315 17:29:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.315 17:29:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:45.315 17:29:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:45.315 17:29:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:45.315 17:29:23 -- common/autotest_common.sh@10 -- # set +x 00:18:50.585 17:29:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:50.585 17:29:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:50.585 17:29:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:50.585 17:29:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:50.585 17:29:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:50.585 17:29:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:50.585 17:29:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:50.585 17:29:28 -- nvmf/common.sh@294 -- # net_devs=() 00:18:50.585 17:29:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:50.585 17:29:28 -- nvmf/common.sh@295 -- # e810=() 00:18:50.585 17:29:28 -- nvmf/common.sh@295 -- # local -ga e810 00:18:50.585 17:29:28 -- nvmf/common.sh@296 -- # x722=() 00:18:50.585 17:29:28 -- nvmf/common.sh@296 -- # local -ga x722 00:18:50.585 17:29:28 -- nvmf/common.sh@297 -- # mlx=() 00:18:50.585 17:29:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:50.585 17:29:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:50.585 17:29:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.585 17:29:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:50.585 17:29:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:50.585 17:29:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:50.585 17:29:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:50.585 17:29:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:50.585 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:50.585 17:29:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:50.585 17:29:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:50.585 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:50.585 17:29:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.585 17:29:28 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:50.585 17:29:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:50.585 17:29:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.585 17:29:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:50.585 17:29:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.585 17:29:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:50.585 Found net devices under 0000:af:00.0: cvl_0_0 00:18:50.585 17:29:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.585 17:29:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:50.585 17:29:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.585 17:29:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:50.585 17:29:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.585 17:29:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:50.585 Found net devices under 0000:af:00.1: cvl_0_1 00:18:50.585 17:29:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.585 17:29:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:50.585 17:29:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:50.585 17:29:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:50.585 17:29:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:50.585 17:29:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.585 17:29:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.585 17:29:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.585 17:29:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:18:50.585 17:29:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.585 17:29:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.585 17:29:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:50.585 17:29:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.585 17:29:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.585 17:29:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:50.585 17:29:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:50.585 17:29:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.585 17:29:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.585 17:29:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.585 17:29:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.585 17:29:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:50.585 17:29:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.585 17:29:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.585 17:29:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.585 17:29:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:50.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:18:50.585 00:18:50.585 --- 10.0.0.2 ping statistics --- 00:18:50.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.585 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:18:50.585 17:29:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:50.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:18:50.585 00:18:50.585 --- 10.0.0.1 ping statistics --- 00:18:50.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.585 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:50.585 17:29:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.585 17:29:29 -- nvmf/common.sh@410 -- # return 0 00:18:50.585 17:29:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:50.585 17:29:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.585 17:29:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:50.585 17:29:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:50.585 17:29:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.585 17:29:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:50.585 17:29:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:50.585 17:29:29 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:50.585 17:29:29 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:50.585 only one NIC for nvmf test 00:18:50.585 17:29:29 -- target/multipath.sh@47 -- # nvmftestfini 00:18:50.585 17:29:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:50.585 17:29:29 -- nvmf/common.sh@116 -- # sync 00:18:50.585 17:29:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:50.585 17:29:29 -- nvmf/common.sh@119 -- # set +e 00:18:50.585 17:29:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:50.585 17:29:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:50.585 rmmod nvme_tcp 00:18:50.585 rmmod nvme_fabrics 00:18:50.585 rmmod nvme_keyring 00:18:50.585 17:29:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:50.585 17:29:29 -- nvmf/common.sh@123 -- # set -e 00:18:50.585 17:29:29 -- nvmf/common.sh@124 -- # return 0 00:18:50.585 17:29:29 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:50.585 17:29:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:50.585 17:29:29 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:50.585 17:29:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:50.585 17:29:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.585 17:29:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:50.585 17:29:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.585 17:29:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.585 17:29:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.489 17:29:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:52.489 17:29:31 -- target/multipath.sh@48 -- # exit 0 00:18:52.489 17:29:31 -- target/multipath.sh@1 -- # nvmftestfini 00:18:52.489 17:29:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:52.489 17:29:31 -- nvmf/common.sh@116 -- # sync 00:18:52.489 17:29:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:52.489 17:29:31 -- nvmf/common.sh@119 -- # set +e 00:18:52.489 17:29:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:52.489 17:29:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:52.489 17:29:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:52.489 17:29:31 -- nvmf/common.sh@123 -- # set -e 00:18:52.489 17:29:31 -- nvmf/common.sh@124 -- # return 0 00:18:52.489 17:29:31 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:52.489 17:29:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:52.489 17:29:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:52.489 17:29:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:52.489 17:29:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.489 17:29:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:52.489 17:29:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.489 17:29:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.489 17:29:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.489 17:29:31 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:52.489 00:18:52.489 real 0m7.511s 00:18:52.489 user 0m1.430s 00:18:52.489 sys 0m4.013s 00:18:52.489 17:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.489 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:18:52.489 ************************************ 00:18:52.489 END TEST nvmf_multipath 00:18:52.489 ************************************ 00:18:52.489 17:29:31 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:52.489 17:29:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:52.489 17:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:52.489 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:18:52.489 ************************************ 00:18:52.489 START TEST nvmf_zcopy 00:18:52.489 ************************************ 00:18:52.489 17:29:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:52.489 * Looking for test storage... 
00:18:52.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.489 17:29:31 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.489 17:29:31 -- nvmf/common.sh@7 -- # uname -s 00:18:52.489 17:29:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.489 17:29:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.489 17:29:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.747 17:29:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.747 17:29:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.747 17:29:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.747 17:29:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.747 17:29:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.747 17:29:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.747 17:29:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.747 17:29:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:52.747 17:29:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:52.747 17:29:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.747 17:29:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.747 17:29:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.747 17:29:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.747 17:29:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.747 17:29:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.747 17:29:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.748 17:29:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.748 17:29:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.748 17:29:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.748 17:29:31 -- paths/export.sh@5 -- # export PATH 00:18:52.748 17:29:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.748 17:29:31 -- nvmf/common.sh@46 -- # : 0 00:18:52.748 17:29:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:52.748 17:29:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:52.748 17:29:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:52.748 17:29:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.748 17:29:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.748 17:29:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:52.748 17:29:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:52.748 17:29:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:52.748 17:29:31 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:52.748 17:29:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:52.748 17:29:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.748 17:29:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:52.748 17:29:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:52.748 17:29:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:52.748 17:29:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.748 17:29:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.748 17:29:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.748 17:29:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:52.748 17:29:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:52.748 17:29:31 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:18:52.748 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:18:58.020 17:29:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:58.020 17:29:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:58.020 17:29:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:58.020 17:29:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:58.020 17:29:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:58.020 17:29:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:58.020 17:29:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:58.020 17:29:36 -- nvmf/common.sh@294 -- # net_devs=() 00:18:58.020 17:29:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:58.020 17:29:36 -- nvmf/common.sh@295 -- # e810=() 00:18:58.020 17:29:36 -- nvmf/common.sh@295 -- # local -ga e810 00:18:58.020 17:29:36 -- nvmf/common.sh@296 -- # x722=() 00:18:58.020 17:29:36 -- nvmf/common.sh@296 -- # local -ga x722 00:18:58.020 17:29:36 -- nvmf/common.sh@297 -- # mlx=() 00:18:58.020 17:29:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:58.020 17:29:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.020 17:29:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:58.020 17:29:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:58.020 17:29:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:58.020 17:29:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:58.020 17:29:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:58.020 17:29:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:58.020 17:29:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:58.020 17:29:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:58.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:58.021 17:29:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:58.021 17:29:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:58.021 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:58.021 17:29:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:58.021 17:29:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:18:58.021 17:29:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.021 17:29:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:58.021 17:29:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.021 17:29:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:58.021 Found net devices under 0000:af:00.0: cvl_0_0 00:18:58.021 17:29:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.021 17:29:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:58.021 17:29:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.021 17:29:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:58.021 17:29:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.021 17:29:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:58.021 Found net devices under 0000:af:00.1: cvl_0_1 00:18:58.021 17:29:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.021 17:29:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:58.021 17:29:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:58.021 17:29:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:58.021 17:29:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:58.021 17:29:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.021 17:29:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.021 17:29:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.021 17:29:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:58.021 17:29:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.021 17:29:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.021 17:29:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:58.021 17:29:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:58.021 17:29:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.021 17:29:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:58.021 17:29:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:58.021 17:29:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.021 17:29:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.021 17:29:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.021 17:29:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.021 17:29:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:58.021 17:29:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.280 17:29:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.280 17:29:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.280 17:29:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:58.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:18:58.280 00:18:58.280 --- 10.0.0.2 ping statistics --- 00:18:58.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.280 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:58.280 17:29:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:18:58.280 00:18:58.280 --- 10.0.0.1 ping statistics --- 00:18:58.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.280 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:18:58.280 17:29:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.280 17:29:37 -- nvmf/common.sh@410 -- # return 0 00:18:58.280 17:29:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:58.280 17:29:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.280 17:29:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:58.280 17:29:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:58.280 17:29:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.280 17:29:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:58.280 17:29:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:58.280 17:29:37 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:58.281 17:29:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:58.281 17:29:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:58.281 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.281 17:29:37 -- nvmf/common.sh@469 -- # nvmfpid=4120415 00:18:58.281 17:29:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:58.281 17:29:37 -- nvmf/common.sh@470 -- # waitforlisten 4120415 00:18:58.281 17:29:37 -- common/autotest_common.sh@819 -- # '[' -z 4120415 ']' 00:18:58.281 17:29:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.281 17:29:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:58.281 17:29:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:58.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.281 17:29:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:58.281 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.281 [2024-07-12 17:29:37.127826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:58.281 [2024-07-12 17:29:37.127869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.281 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.281 [2024-07-12 17:29:37.194656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.281 [2024-07-12 17:29:37.237027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:58.281 [2024-07-12 17:29:37.237167] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.281 [2024-07-12 17:29:37.237180] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.281 [2024-07-12 17:29:37.237189] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.281 [2024-07-12 17:29:37.237215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.540 17:29:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:58.540 17:29:37 -- common/autotest_common.sh@852 -- # return 0 00:18:58.540 17:29:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:58.540 17:29:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:58.540 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.540 17:29:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.540 17:29:37 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:58.540 17:29:37 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:58.540 17:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.540 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.540 [2024-07-12 17:29:37.387123] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.540 17:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.540 17:29:37 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:58.540 17:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.540 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.540 17:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.540 17:29:37 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.540 17:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.540 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.540 [2024-07-12 17:29:37.403308] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.540 17:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.540 17:29:37 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:58.540 17:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.540 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.540 17:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.540 17:29:37 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:58.540 17:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.540 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.540 malloc0 00:18:58.540 17:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.540 17:29:37 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.540 17:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.540 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.540 17:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.540 17:29:37 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:58.540 17:29:37 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:58.540 17:29:37 -- nvmf/common.sh@520 -- # config=() 00:18:58.540 17:29:37 -- nvmf/common.sh@520 -- # local subsystem config 00:18:58.540 17:29:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:58.540 17:29:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:58.540 { 00:18:58.540 "params": { 00:18:58.540 "name": "Nvme$subsystem", 00:18:58.540 "trtype": "$TEST_TRANSPORT", 00:18:58.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.540 "adrfam": "ipv4", 00:18:58.540 "trsvcid": "$NVMF_PORT", 00:18:58.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.540 "hdgst": ${hdgst:-false}, 00:18:58.540 "ddgst": ${ddgst:-false} 00:18:58.540 }, 00:18:58.540 "method": "bdev_nvme_attach_controller" 00:18:58.540 } 00:18:58.540 
EOF 00:18:58.540 )") 00:18:58.540 17:29:37 -- nvmf/common.sh@542 -- # cat 00:18:58.540 17:29:37 -- nvmf/common.sh@544 -- # jq . 00:18:58.540 17:29:37 -- nvmf/common.sh@545 -- # IFS=, 00:18:58.540 17:29:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:58.540 "params": { 00:18:58.540 "name": "Nvme1", 00:18:58.540 "trtype": "tcp", 00:18:58.540 "traddr": "10.0.0.2", 00:18:58.540 "adrfam": "ipv4", 00:18:58.540 "trsvcid": "4420", 00:18:58.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.540 "hdgst": false, 00:18:58.540 "ddgst": false 00:18:58.540 }, 00:18:58.540 "method": "bdev_nvme_attach_controller" 00:18:58.540 }' 00:18:58.540 [2024-07-12 17:29:37.482300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:58.540 [2024-07-12 17:29:37.482355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120601 ] 00:18:58.799 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.799 [2024-07-12 17:29:37.563159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.799 [2024-07-12 17:29:37.604561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.058 Running I/O for 10 seconds... 
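The xtrace above shows nvmf/common.sh's gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem from a heredoc, comma-joining the stanzas, and handing the result to bdevperf over /dev/fd/62. A minimal stand-alone sketch of that pattern, hard-coding the tcp / 10.0.0.2 / 4420 values this run substituted for $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT (`gen_target_json` is an illustrative name, not the real helper):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the gen_nvmf_target_json pattern seen in the log;
# not the actual nvmf/common.sh helper. One attach-controller stanza is
# generated per subsystem number; the stanzas are comma-joined, as the real
# script does before piping them through jq into bdevperf's --json input.
gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem" "$subsystem")")
    done
    local IFS=,                      # comma-join the collected stanzas
    printf '%s\n' "${config[*]}"
}

gen_target_json 1
```

Passing more than one subsystem number would emit one comma-separated stanza per cnode, which is why the helper loops over "${@:-1}" rather than assuming a single controller.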
00:19:09.037 
00:19:09.037 Latency(us)
00:19:09.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:09.037 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:19:09.037 Verification LBA range: start 0x0 length 0x1000
00:19:09.037 Nvme1n1 : 10.01 8555.88 66.84 0.00 0.00 14920.71 2234.18 21567.30
00:19:09.037 ===================================================================================================================
00:19:09.037 Total : 8555.88 66.84 0.00 0.00 14920.71 2234.18 21567.30
00:19:09.296 17:29:48 -- target/zcopy.sh@39 -- # perfpid=4122546 00:19:09.296 17:29:48 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:09.296 17:29:48 -- common/autotest_common.sh@10 -- # set +x 00:19:09.296 17:29:48 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:09.296 17:29:48 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:09.296 17:29:48 -- nvmf/common.sh@520 -- # config=() 00:19:09.296 17:29:48 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.296 17:29:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.296 17:29:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.296 { 00:19:09.296 "params": { 00:19:09.296 "name": "Nvme$subsystem", 00:19:09.296 "trtype": "$TEST_TRANSPORT", 00:19:09.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.296 "adrfam": "ipv4", 00:19:09.296 "trsvcid": "$NVMF_PORT", 00:19:09.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.296 "hdgst": ${hdgst:-false}, 00:19:09.296 "ddgst": ${ddgst:-false} 00:19:09.296 }, 00:19:09.296 "method": "bdev_nvme_attach_controller" 00:19:09.296 } 00:19:09.296 EOF 00:19:09.296 )") 00:19:09.296 17:29:48 -- nvmf/common.sh@542 -- # cat 00:19:09.296 [2024-07-12 17:29:48.025855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1
already in use 00:19:09.296 [2024-07-12 17:29:48.025891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 17:29:48 -- nvmf/common.sh@544 -- # jq . 00:19:09.296 17:29:48 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.296 17:29:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.296 "params": { 00:19:09.296 "name": "Nvme1", 00:19:09.296 "trtype": "tcp", 00:19:09.296 "traddr": "10.0.0.2", 00:19:09.296 "adrfam": "ipv4", 00:19:09.296 "trsvcid": "4420", 00:19:09.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.296 "hdgst": false, 00:19:09.296 "ddgst": false 00:19:09.296 }, 00:19:09.296 "method": "bdev_nvme_attach_controller" 00:19:09.296 }' 00:19:09.296 [2024-07-12 17:29:48.033850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.033867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.041869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.041884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.049893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.049907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.057916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.057936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.064860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
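As a quick arithmetic check of the bdevperf summary above: the MiB/s column should equal IOPS × IO size / 2^20, and the reported 8555.88 IOPS at this run's 8192-byte IO size (-o 8192) does reproduce the 66.84 MiB/s figure:

```shell
# Re-derive bdevperf's MiB/s column from its IOPS column:
# MiB/s = IOPS * io_size_bytes / 2^20, with 8192-byte IOs.
awk 'BEGIN { printf "%.2f MiB/s\n", 8555.88 * 8192 / (1024 * 1024) }'
# -> 66.84 MiB/s, matching the Nvme1n1 row in the latency table
```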
00:19:09.296 [2024-07-12 17:29:48.064915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122546 ] 00:19:09.296 [2024-07-12 17:29:48.065941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.065957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.073962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.073976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.081988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.082001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.090011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.090024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.296 [2024-07-12 17:29:48.098033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.098047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.106053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.106065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.114076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.114089] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.122099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.122112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.130124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.130136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.138147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.138161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.146168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.146181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.146413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.296 [2024-07-12 17:29:48.154191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.154205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.162215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.162229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.170243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.170271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.178267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:19:09.296 [2024-07-12 17:29:48.178281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.186287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.186301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.187779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.296 [2024-07-12 17:29:48.194314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.194333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.202334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.202352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.210354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.210371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.218381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.218397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.226402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.226417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.234423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.234437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.242445] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.242460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.250469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.250483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.296 [2024-07-12 17:29:48.258491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.296 [2024-07-12 17:29:48.258505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.266514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.266527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.274560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.274587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.282570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.282589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.290593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.290610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.298616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.298634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.306638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.306652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.314659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.314672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.322685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.322699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.330709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.330722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.338734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.338755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.346756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.346772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.354780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.354798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.362802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.362818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.370825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 
[2024-07-12 17:29:48.370839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.378854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.378876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 Running I/O for 5 seconds... 00:19:09.555 [2024-07-12 17:29:48.386873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.386887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.400817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.400841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.411768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.411792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.422539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.422562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.433335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.433358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.444364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.444387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.455954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 
17:29:48.455977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.467134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.467158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.478129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.478152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.488859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.488885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.499589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.499613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.510445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.510468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.555 [2024-07-12 17:29:48.523455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.555 [2024-07-12 17:29:48.523478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.532983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.533007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.544535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.544558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.555119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.555143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.565706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.565729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.576556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.576578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.589566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.589589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.599396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.599420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.610729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.610752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.621903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.621925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.632720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.632743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 
[2024-07-12 17:29:48.646201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.646224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.656696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.656719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.667633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.667655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.680677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.680700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.690300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.690323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.701824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.701848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.712967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.712991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.724005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.724028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.735039] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.735062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.745910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.745934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.759126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.759149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.769519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.769542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.814 [2024-07-12 17:29:48.779910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.814 [2024-07-12 17:29:48.779933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.072 [2024-07-12 17:29:48.790399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.072 [2024-07-12 17:29:48.790427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.072 [2024-07-12 17:29:48.801358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.072 [2024-07-12 17:29:48.801381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.072 [2024-07-12 17:29:48.812648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.072 [2024-07-12 17:29:48.812671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.072 [2024-07-12 17:29:48.823270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:10.072 [2024-07-12 17:29:48.823294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.072 [2024-07-12 17:29:48.834340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.834364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.845092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.845114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.858028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.858051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.867402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.867424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.878844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.878867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.889765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.889788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.900420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.900444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.913667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 
[2024-07-12 17:29:48.913690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.923750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.923773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.934661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.934686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.945629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.945658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.956548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.956572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.969294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.969317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.979896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.979920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:48.990651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:48.990675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:49.004041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:49.004066] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:49.013897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:49.013921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:49.024802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:49.024826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.073 [2024-07-12 17:29:49.037589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.073 [2024-07-12 17:29:49.037612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.049610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.049635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.059406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.059429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.071129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.071153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.081821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.081845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.092731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.092755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:10.330 [2024-07-12 17:29:49.103700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.103723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.116684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.116708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.127204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.127229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.138058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.138081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.150949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.150972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.160601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.160631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.171800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.171824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.184666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.184689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.195499] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.195523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.206482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.206505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.219092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.219115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.229160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.229183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.240012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.240035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.252928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.252951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.330 [2024-07-12 17:29:49.263142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.330 [2024-07-12 17:29:49.263166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.331 [2024-07-12 17:29:49.273812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.331 [2024-07-12 17:29:49.273835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.331 [2024-07-12 17:29:49.286571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:10.331 [2024-07-12 17:29:49.286595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.331 [2024-07-12 17:29:49.296680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.331 [2024-07-12 17:29:49.296705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.307348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.307372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.320150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.320174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.329410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.329435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.341058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.341082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.351989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.352013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.363059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.363082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.374018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 
[2024-07-12 17:29:49.374047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.385245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.385274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.395477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.395499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.406545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.406567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.419594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.419617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.431790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.431814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.441136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.441160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.452922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.452945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.465363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.465386] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.475417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.475441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.486265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.486288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.497270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.497294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.508424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.508448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.519425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.519447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.532292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.532316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.542324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.542347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.589 [2024-07-12 17:29:49.553121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.589 [2024-07-12 17:29:49.553145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:10.879 [2024-07-12 17:29:49.566079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.566103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.576088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.576111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.586916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.586943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.600124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.600147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.610398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.610421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.621519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.621542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.634404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.634427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.644494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.644518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.655931] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.655954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.666999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.667022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.678067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.678090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.690788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.690811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.701223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.701246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.711998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.712021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.722584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.722607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.733158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.733182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.744062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.744085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.755194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.755217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.765798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.765821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.776700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.776722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.789269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.789292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.799439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.799469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.810308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.810331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.821369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.821392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.834396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 
[2024-07-12 17:29:49.834420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.879 [2024-07-12 17:29:49.844246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.879 [2024-07-12 17:29:49.844275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.855759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.855782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.866617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.866641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.877660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.877682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.890640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.890663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.900986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.901009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.912034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.912057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.925614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.925637] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.936211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.936234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.946955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.946979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.957707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.957731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.968172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.166 [2024-07-12 17:29:49.968196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.166 [2024-07-12 17:29:49.979147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:49.979171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:49.990134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:49.990156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.003533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.003556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.013607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.013630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:11.167 [2024-07-12 17:29:50.024317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.024340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.035185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.035209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.046287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.046311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.059157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.059180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.069213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.069235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.080174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.080197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.091476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.091500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.102245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.102274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.112923] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.112946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.167 [2024-07-12 17:29:50.123855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.167 [2024-07-12 17:29:50.123877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.136667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.136690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.147047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.147070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.158002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.158024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.171141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.171163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.181551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.181574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.192267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.192290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.202924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.202947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.214172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.214195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.224911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.224933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.238367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.238391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.248863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.248886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.259588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.259611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.270172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.270196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.280923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.280946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.291790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 
[2024-07-12 17:29:50.291813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.302440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.302463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.313569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.313593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.325094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.325117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.335773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.335796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.346703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.346728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.357603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.357627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.370302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.370325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.380395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.380418] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.426 [2024-07-12 17:29:50.391346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.426 [2024-07-12 17:29:50.391369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.402091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.402115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.412593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.412617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.423169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.423193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.433956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.433979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.447078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.447102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.457336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.457360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.468144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.468168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:11.685 [2024-07-12 17:29:50.478890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.478913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.489656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.489679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.500781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.500804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.511664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.511687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.524453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.524477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.534407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.534431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.545755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.545778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.558534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.558557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.685 [2024-07-12 17:29:50.568102] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.685 [2024-07-12 17:29:50.568126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same error pair — subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously from [2024-07-12 17:29:50.578767] through [2024-07-12 17:29:52.403292]; repeats elided]
00:19:13.501 [2024-07-12 17:29:52.413232] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.501 [2024-07-12 17:29:52.413264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.501 [2024-07-12 17:29:52.424240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.501 [2024-07-12 17:29:52.424269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.501 [2024-07-12 17:29:52.437310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.501 [2024-07-12 17:29:52.437334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.501 [2024-07-12 17:29:52.447938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.501 [2024-07-12 17:29:52.447962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.501 [2024-07-12 17:29:52.458694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.501 [2024-07-12 17:29:52.458718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.469881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.469905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.481021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.481044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.491933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.491956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.503236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.503265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.514278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.514301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.525570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.525593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.536352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.536383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.547575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.547603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.558555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.558578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.569412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.569434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.582514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.582536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.592629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 
[2024-07-12 17:29:52.592651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.603237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.603266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.614404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.614427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.625741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.625763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.637204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.637226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.647916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.647940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.658701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.658724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.669453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.669476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.680370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.680393] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.691519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.691542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.704524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.704547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.714462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.714485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.760 [2024-07-12 17:29:52.725389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.760 [2024-07-12 17:29:52.725412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.738304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.738327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.748376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.748399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.759950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.759977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.770912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.770936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:14.019 [2024-07-12 17:29:52.781553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.781576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.792810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.792833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.803863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.803885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.815215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.815238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.826626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.826650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.837289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.837313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.848152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.848174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.858866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.858889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.869668] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.869690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.880828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.880850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.891827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.891849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.904720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.904743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.914710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.914733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.925692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.925715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.938478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.938501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.948118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.948140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.959440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.959463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.970305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.970336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.019 [2024-07-12 17:29:52.981493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.019 [2024-07-12 17:29:52.981516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:52.996100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:52.996123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.006374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.006397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.017592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.017616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.028392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.028416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.039465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.039488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.052695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 
[2024-07-12 17:29:53.052717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.063330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.063354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.074342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.074364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.087008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.087031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.096673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.096695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.107953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.107975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.118556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.118579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.129614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.129636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.142926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.142949] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.153036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.153059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.163558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.163581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.174530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.174553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.187365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.187393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.197774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.197798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.208657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.208680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.219583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.219607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.278 [2024-07-12 17:29:53.230169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.230192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:14.278 [2024-07-12 17:29:53.240817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.278 [2024-07-12 17:29:53.240840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.251373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.251396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.262422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.262445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.275094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.275117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.285493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.285516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.296163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.296187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.306878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.306901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.317815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.317838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.330769] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.330792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.343157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.343182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.352358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.352382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.363932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.363956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.376481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.376506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.385622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.385646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.398828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.398852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 [2024-07-12 17:29:53.406915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.406938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:14.537 00:19:14.537 Latency(us) 00:19:14.537 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:19:14.537
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:14.537
Nvme1n1 : 5.01 11661.45 91.11 0.00 0.00 10964.27 5004.57 21686.46 00:19:14.537
=================================================================================================================== 00:19:14.537
Total : 11661.45 91.11 0.00 0.00 10964.27 5004.57 21686.46 00:19:14.537
[2024-07-12 17:29:53.414501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.537 [2024-07-12 17:29:53.414521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats continuously with advancing timestamps from 17:29:53.422 through 17:29:53.583; the duplicate entries are elided ...]
00:19:14.797 [2024-07-12 17:29:53.591005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:14.797 [2024-07-12 17:29:53.591027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:14.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4122546) - No such process 00:19:14.797 17:29:53 -- target/zcopy.sh@49 -- # wait 4122546 00:19:14.797 17:29:53 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:14.797 17:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.797 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:14.797 17:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.797 17:29:53 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:14.797 17:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.797 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:14.797 delay0 00:19:14.797 17:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.797 17:29:53 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:14.797 17:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.797 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:14.797 17:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.797 17:29:53 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:14.797 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.797 [2024-07-12 17:29:53.685700] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:21.366 Initializing NVMe Controllers 00:19:21.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:21.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:21.366 Initialization complete. 
Launching workers. 00:19:21.366 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:19:21.366 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 33 00:19:21.366 success 218, unsuccess 169, failed 0 00:19:21.366 17:29:59 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:21.366 17:29:59 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:21.366 17:29:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:21.366 17:29:59 -- nvmf/common.sh@116 -- # sync 00:19:21.366 17:29:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:21.366 17:29:59 -- nvmf/common.sh@119 -- # set +e 00:19:21.366 17:29:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:21.366 17:29:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:21.366 rmmod nvme_tcp 00:19:21.366 rmmod nvme_fabrics 00:19:21.366 rmmod nvme_keyring 00:19:21.366 17:29:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:21.366 17:29:59 -- nvmf/common.sh@123 -- # set -e 00:19:21.366 17:29:59 -- nvmf/common.sh@124 -- # return 0 00:19:21.366 17:29:59 -- nvmf/common.sh@477 -- # '[' -n 4120415 ']' 00:19:21.366 17:29:59 -- nvmf/common.sh@478 -- # killprocess 4120415 00:19:21.366 17:29:59 -- common/autotest_common.sh@926 -- # '[' -z 4120415 ']' 00:19:21.366 17:29:59 -- common/autotest_common.sh@930 -- # kill -0 4120415 00:19:21.366 17:29:59 -- common/autotest_common.sh@931 -- # uname 00:19:21.366 17:29:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:21.366 17:29:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4120415 00:19:21.366 17:29:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:21.366 17:29:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:21.366 17:29:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4120415' 00:19:21.366 killing process with pid 4120415 00:19:21.366 17:29:59 -- common/autotest_common.sh@945 -- # 
kill 4120415 00:19:21.366 17:29:59 -- common/autotest_common.sh@950 -- # wait 4120415 00:19:21.366 17:30:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:21.366 17:30:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:21.366 17:30:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:21.366 17:30:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.366 17:30:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:21.366 17:30:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.366 17:30:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.366 17:30:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.271 17:30:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:23.271 00:19:23.271 real 0m30.753s 00:19:23.271 user 0m42.478s 00:19:23.271 sys 0m9.725s 00:19:23.271 17:30:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.271 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:19:23.271 ************************************ 00:19:23.271 END TEST nvmf_zcopy 00:19:23.271 ************************************ 00:19:23.271 17:30:02 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:23.271 17:30:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:23.271 17:30:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:23.271 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:19:23.271 ************************************ 00:19:23.271 START TEST nvmf_nmic 00:19:23.271 ************************************ 00:19:23.271 17:30:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:23.271 * Looking for test storage... 
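The nmic run below starts by sourcing nvmf/common.sh, which builds the initiator identity with `nvme gen-hostnqn`. As a minimal sketch (the validation regex and variable names are ours, not the harness's), the UUID-based NQN it produces can be checked for shape like this:

```shell
# Validate the shape of a gen-hostnqn style NQN. The regex is an
# assumption based on the nqn.2014-08.org.nvmexpress:uuid:<UUID> form
# seen in this log, not something the harness itself does.
hostnqn="nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562"
valid=no
if printf '%s\n' "$hostnqn" | grep -Eq \
    '^nqn\.2014-08\.org\.nvmexpress:uuid:[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
  valid=yes
fi
echo "hostnqn valid: $valid"
```

The same NQN is reused later as `--hostnqn` for every `nvme connect` in the run, so a malformed value would fail all path tests at once.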
00:19:23.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.531 17:30:02 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.531 17:30:02 -- nvmf/common.sh@7 -- # uname -s 00:19:23.531 17:30:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.531 17:30:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.531 17:30:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.531 17:30:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.531 17:30:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.531 17:30:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.531 17:30:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.531 17:30:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.531 17:30:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.531 17:30:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.531 17:30:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:23.531 17:30:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:23.531 17:30:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.531 17:30:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.531 17:30:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.531 17:30:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.531 17:30:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.531 17:30:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.531 17:30:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.531 17:30:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.531 17:30:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.531 17:30:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.531 17:30:02 -- paths/export.sh@5 -- # export PATH 00:19:23.531 17:30:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.531 17:30:02 -- nvmf/common.sh@46 -- # : 0 00:19:23.531 17:30:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:23.531 17:30:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:23.531 17:30:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:23.531 17:30:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.531 17:30:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.531 17:30:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:23.531 17:30:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:23.531 17:30:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:23.531 17:30:02 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:23.531 17:30:02 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:23.531 17:30:02 -- target/nmic.sh@14 -- # nvmftestinit 00:19:23.531 17:30:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:23.531 17:30:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.531 17:30:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:23.531 17:30:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:23.531 17:30:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:23.531 17:30:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.531 17:30:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.531 17:30:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.531 17:30:02 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:19:23.531 17:30:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:23.531 17:30:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:23.531 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:19:28.839 17:30:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:28.839 17:30:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:28.839 17:30:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:28.839 17:30:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:28.839 17:30:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:28.839 17:30:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:28.839 17:30:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:28.839 17:30:07 -- nvmf/common.sh@294 -- # net_devs=() 00:19:28.839 17:30:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:28.839 17:30:07 -- nvmf/common.sh@295 -- # e810=() 00:19:28.839 17:30:07 -- nvmf/common.sh@295 -- # local -ga e810 00:19:28.839 17:30:07 -- nvmf/common.sh@296 -- # x722=() 00:19:28.839 17:30:07 -- nvmf/common.sh@296 -- # local -ga x722 00:19:28.839 17:30:07 -- nvmf/common.sh@297 -- # mlx=() 00:19:28.839 17:30:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:28.839 17:30:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.839 17:30:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:28.839 17:30:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:28.839 17:30:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:28.839 17:30:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.839 17:30:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:28.839 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:28.839 17:30:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.839 17:30:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:28.839 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:28.839 17:30:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:28.839 17:30:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:19:28.839 17:30:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.839 17:30:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.839 17:30:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.839 17:30:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.839 17:30:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:28.839 Found net devices under 0000:af:00.0: cvl_0_0 00:19:28.839 17:30:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.839 17:30:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.839 17:30:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.839 17:30:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.840 17:30:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.840 17:30:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:28.840 Found net devices under 0000:af:00.1: cvl_0_1 00:19:28.840 17:30:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.840 17:30:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:28.840 17:30:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:28.840 17:30:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:28.840 17:30:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:28.840 17:30:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:28.840 17:30:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.840 17:30:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.840 17:30:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.840 17:30:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:28.840 17:30:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.840 17:30:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.840 17:30:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:19:28.840 17:30:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.840 17:30:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.840 17:30:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:28.840 17:30:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:28.840 17:30:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.840 17:30:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.840 17:30:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.840 17:30:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.840 17:30:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:28.840 17:30:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.099 17:30:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.099 17:30:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.099 17:30:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:29.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:19:29.099 00:19:29.099 --- 10.0.0.2 ping statistics --- 00:19:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.099 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:29.099 17:30:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:19:29.099 00:19:29.099 --- 10.0.0.1 ping statistics --- 00:19:29.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.099 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:19:29.099 17:30:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.099 17:30:07 -- nvmf/common.sh@410 -- # return 0 00:19:29.099 17:30:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:29.099 17:30:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.099 17:30:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:29.099 17:30:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:29.099 17:30:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.099 17:30:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:29.099 17:30:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:29.099 17:30:07 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:29.099 17:30:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:29.099 17:30:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:29.099 17:30:07 -- common/autotest_common.sh@10 -- # set +x 00:19:29.099 17:30:07 -- nvmf/common.sh@469 -- # nvmfpid=4128425 00:19:29.099 17:30:07 -- nvmf/common.sh@470 -- # waitforlisten 4128425 00:19:29.099 17:30:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:29.099 17:30:07 -- common/autotest_common.sh@819 -- # '[' -z 4128425 ']' 00:19:29.099 17:30:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.099 17:30:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:29.099 17:30:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
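nvmftestinit above treats the namespace topology as ready only after a single ping succeeds in each direction. A hedged sketch of pulling the loss percentage out of a ping summary line (the sample string is copied from the run above; the sed parsing is illustrative, not harness code):

```shell
# Extract the packet-loss percentage from a ping summary line; real
# harness code just checks ping's exit status, this shows the figure
# the log prints being parsed out explicitly.
summary='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
loss=$(printf '%s\n' "$summary" | sed -n 's/.*, \([0-9][0-9]*\)% packet loss.*/\1/p')
echo "packet loss: ${loss}%"
```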
00:19:29.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.099 17:30:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:29.099 17:30:07 -- common/autotest_common.sh@10 -- # set +x 00:19:29.099 [2024-07-12 17:30:08.010129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:29.099 [2024-07-12 17:30:08.010191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.099 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.358 [2024-07-12 17:30:08.099065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.358 [2024-07-12 17:30:08.142964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:29.358 [2024-07-12 17:30:08.143112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.358 [2024-07-12 17:30:08.143124] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.358 [2024-07-12 17:30:08.143133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.358 [2024-07-12 17:30:08.143179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.358 [2024-07-12 17:30:08.143286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.358 [2024-07-12 17:30:08.143359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.358 [2024-07-12 17:30:08.143362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.294 17:30:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:30.294 17:30:08 -- common/autotest_common.sh@852 -- # return 0 00:19:30.294 17:30:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:30.294 17:30:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:30.294 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 17:30:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.294 17:30:08 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.294 17:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 [2024-07-12 17:30:08.994230] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.294 17:30:09 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:30.294 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 Malloc0 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.294 17:30:09 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:30.294 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:19:30.294 17:30:09 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.294 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.294 17:30:09 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.294 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 [2024-07-12 17:30:09.049944] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.294 17:30:09 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:30.294 test case1: single bdev can't be used in multiple subsystems 00:19:30.294 17:30:09 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:30.294 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.294 17:30:09 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:30.294 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.294 17:30:09 -- target/nmic.sh@28 -- # nmic_status=0 00:19:30.294 17:30:09 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:30.294 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.294 17:30:09 -- common/autotest_common.sh@10 
-- # set +x 00:19:30.294 [2024-07-12 17:30:09.077839] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:30.294 [2024-07-12 17:30:09.077865] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:30.294 [2024-07-12 17:30:09.077875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:30.294 request: 00:19:30.294 { 00:19:30.294 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:30.294 "namespace": { 00:19:30.294 "bdev_name": "Malloc0" 00:19:30.294 }, 00:19:30.294 "method": "nvmf_subsystem_add_ns", 00:19:30.294 "req_id": 1 00:19:30.294 } 00:19:30.294 Got JSON-RPC error response 00:19:30.294 response: 00:19:30.294 { 00:19:30.294 "code": -32602, 00:19:30.294 "message": "Invalid parameters" 00:19:30.294 } 00:19:30.294 17:30:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:30.294 17:30:09 -- target/nmic.sh@29 -- # nmic_status=1 00:19:30.294 17:30:09 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:30.294 17:30:09 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:30.294 Adding namespace failed - expected result. 
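The failed nvmf_subsystem_add_ns call above comes back as a JSON-RPC error object with code -32602 (invalid params), which is exactly what test case1 wants. A minimal sketch of checking a response of that shape for the expected code (the variable names and plain-shell parsing are ours; real tooling would use jq):

```shell
# Pull the numeric error code out of a JSON-RPC error body and compare
# it against the "Invalid params" code the test expects. The response
# string mirrors the one captured in the log above.
response='{"code": -32602, "message": "Invalid parameters"}'
code=$(printf '%s\n' "$response" | sed -n 's/.*"code": *\(-\{0,1\}[0-9][0-9]*\).*/\1/p')
if [ "$code" = "-32602" ]; then
  echo "expected failure: namespace rejected"
fi
```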
00:19:30.294 17:30:09 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:30.295 test case2: host connect to nvmf target in multiple paths 00:19:30.295 17:30:09 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:30.295 17:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.295 17:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.295 [2024-07-12 17:30:09.089996] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:30.295 17:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.295 17:30:09 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:31.670 17:30:10 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:33.046 17:30:11 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:33.046 17:30:11 -- common/autotest_common.sh@1177 -- # local i=0 00:19:33.046 17:30:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.046 17:30:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:33.046 17:30:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:34.954 17:30:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:34.954 17:30:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:34.954 17:30:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:34.954 17:30:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:34.954 17:30:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 
00:19:34.954 17:30:13 -- common/autotest_common.sh@1187 -- # return 0 00:19:34.954 17:30:13 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:34.954 [global] 00:19:34.954 thread=1 00:19:34.954 invalidate=1 00:19:34.954 rw=write 00:19:34.954 time_based=1 00:19:34.954 runtime=1 00:19:34.954 ioengine=libaio 00:19:34.954 direct=1 00:19:34.954 bs=4096 00:19:34.954 iodepth=1 00:19:34.954 norandommap=0 00:19:34.954 numjobs=1 00:19:34.954 00:19:34.954 verify_dump=1 00:19:34.954 verify_backlog=512 00:19:34.954 verify_state_save=0 00:19:34.954 do_verify=1 00:19:34.954 verify=crc32c-intel 00:19:34.954 [job0] 00:19:34.954 filename=/dev/nvme0n1 00:19:34.954 Could not set queue depth (nvme0n1) 00:19:35.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.531 fio-3.35 00:19:35.531 Starting 1 thread 00:19:36.466 00:19:36.466 job0: (groupid=0, jobs=1): err= 0: pid=4129910: Fri Jul 12 17:30:15 2024 00:19:36.466 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:19:36.466 slat (nsec): min=9916, max=23967, avg=17374.36, stdev=5793.60 00:19:36.466 clat (usec): min=40503, max=41890, avg=40997.86, stdev=226.25 00:19:36.467 lat (usec): min=40513, max=41913, avg=41015.23, stdev=227.95 00:19:36.467 clat percentiles (usec): 00:19:36.467 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:36.467 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:36.467 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:36.467 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:36.467 | 99.99th=[41681] 00:19:36.467 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:19:36.467 slat (usec): min=9, max=25656, avg=61.73, stdev=1133.34 00:19:36.467 clat (usec): min=132, max=389, avg=175.61, stdev=29.91 00:19:36.467 lat (usec): min=143, 
max=25984, avg=237.34, stdev=1140.51 00:19:36.467 clat percentiles (usec): 00:19:36.467 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:19:36.467 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:19:36.467 | 70.00th=[ 172], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 241], 00:19:36.467 | 99.00th=[ 249], 99.50th=[ 330], 99.90th=[ 392], 99.95th=[ 392], 00:19:36.467 | 99.99th=[ 392] 00:19:36.467 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:36.467 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:36.467 lat (usec) : 250=94.94%, 500=0.94% 00:19:36.467 lat (msec) : 50=4.12% 00:19:36.467 cpu : usr=0.29%, sys=0.78%, ctx=539, majf=0, minf=2 00:19:36.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.467 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:36.467 00:19:36.467 Run status group 0 (all jobs): 00:19:36.467 READ: bw=85.8KiB/s (87.8kB/s), 85.8KiB/s-85.8KiB/s (87.8kB/s-87.8kB/s), io=88.0KiB (90.1kB), run=1026-1026msec 00:19:36.467 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:19:36.467 00:19:36.467 Disk stats (read/write): 00:19:36.467 nvme0n1: ios=44/512, merge=0/0, ticks=1727/86, in_queue=1813, util=98.80% 00:19:36.467 17:30:15 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:36.725 17:30:15 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:36.725 17:30:15 -- common/autotest_common.sh@1198 -- # local i=0 00:19:36.725 17:30:15 -- common/autotest_common.sh@1199 -- # lsblk -o 
NAME,SERIAL 00:19:36.725 17:30:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.725 17:30:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:36.725 17:30:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.725 17:30:15 -- common/autotest_common.sh@1210 -- # return 0 00:19:36.725 17:30:15 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:36.725 17:30:15 -- target/nmic.sh@53 -- # nvmftestfini 00:19:36.725 17:30:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:36.725 17:30:15 -- nvmf/common.sh@116 -- # sync 00:19:36.725 17:30:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:36.725 17:30:15 -- nvmf/common.sh@119 -- # set +e 00:19:36.725 17:30:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:36.725 17:30:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:36.725 rmmod nvme_tcp 00:19:36.725 rmmod nvme_fabrics 00:19:36.725 rmmod nvme_keyring 00:19:36.725 17:30:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:36.725 17:30:15 -- nvmf/common.sh@123 -- # set -e 00:19:36.726 17:30:15 -- nvmf/common.sh@124 -- # return 0 00:19:36.726 17:30:15 -- nvmf/common.sh@477 -- # '[' -n 4128425 ']' 00:19:36.726 17:30:15 -- nvmf/common.sh@478 -- # killprocess 4128425 00:19:36.726 17:30:15 -- common/autotest_common.sh@926 -- # '[' -z 4128425 ']' 00:19:36.726 17:30:15 -- common/autotest_common.sh@930 -- # kill -0 4128425 00:19:36.726 17:30:15 -- common/autotest_common.sh@931 -- # uname 00:19:36.726 17:30:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:36.726 17:30:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4128425 00:19:36.984 17:30:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:36.984 17:30:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:36.984 17:30:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4128425' 00:19:36.984 killing process with pid 4128425 
00:19:36.984 17:30:15 -- common/autotest_common.sh@945 -- # kill 4128425 00:19:36.984 17:30:15 -- common/autotest_common.sh@950 -- # wait 4128425 00:19:36.984 17:30:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:36.984 17:30:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:36.984 17:30:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:36.984 17:30:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.984 17:30:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:36.984 17:30:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.984 17:30:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.984 17:30:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.519 17:30:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:39.520 00:19:39.520 real 0m15.840s 00:19:39.520 user 0m44.288s 00:19:39.520 sys 0m5.062s 00:19:39.520 17:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.520 17:30:17 -- common/autotest_common.sh@10 -- # set +x 00:19:39.520 ************************************ 00:19:39.520 END TEST nvmf_nmic 00:19:39.520 ************************************ 00:19:39.520 17:30:18 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:39.520 17:30:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:39.520 17:30:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.520 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:19:39.520 ************************************ 00:19:39.520 START TEST nvmf_fio_target 00:19:39.520 ************************************ 00:19:39.520 17:30:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:39.520 * Looking for test storage... 
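killprocess above relies on the `kill -0` probe: signal 0 delivers nothing but reports whether the pid can still be signalled, so the function can confirm the target exists before the kill and is gone after it. A hedged sketch of the same pattern against a throwaway background process (the state variable and messages are ours):

```shell
# Demonstrate the kill -0 liveness probe used by killprocess.
sleep 30 &
pid=$!
state=unknown
kill -0 "$pid" 2>/dev/null && state=alive   # pid exists before the kill
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true             # reap the child so the pid is truly gone
kill -0 "$pid" 2>/dev/null || state=gone    # probe now fails
echo "process state: $state"
```

The harness layers `ps --no-headers -o comm=` on top of this probe (visible in the log) so it never kills a pid that was recycled by an unrelated process.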
00:19:39.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.520 17:30:18 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.520 17:30:18 -- nvmf/common.sh@7 -- # uname -s 00:19:39.520 17:30:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.520 17:30:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.520 17:30:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.520 17:30:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.520 17:30:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.520 17:30:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.520 17:30:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.520 17:30:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.520 17:30:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.520 17:30:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.520 17:30:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:39.520 17:30:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:39.520 17:30:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.520 17:30:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.520 17:30:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.520 17:30:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.520 17:30:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.520 17:30:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.520 17:30:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.520 17:30:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.520 17:30:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.520 17:30:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.520 17:30:18 -- paths/export.sh@5 -- # export PATH 00:19:39.520 17:30:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.520 17:30:18 -- nvmf/common.sh@46 -- # : 0 00:19:39.520 17:30:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:39.520 17:30:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:39.520 17:30:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:39.520 17:30:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.520 17:30:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.520 17:30:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:39.520 17:30:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:39.520 17:30:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:39.520 17:30:18 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.520 17:30:18 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.520 17:30:18 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.520 17:30:18 -- target/fio.sh@16 -- # nvmftestinit 00:19:39.520 17:30:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:39.520 17:30:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.520 17:30:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:39.520 17:30:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:39.520 17:30:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:39.520 17:30:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.520 17:30:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:39.520 17:30:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.520 17:30:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:39.520 17:30:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:39.520 17:30:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:39.520 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:19:44.786 17:30:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:44.786 17:30:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:44.786 17:30:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:44.786 17:30:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:44.786 17:30:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:44.786 17:30:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:44.786 17:30:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:44.786 17:30:23 -- nvmf/common.sh@294 -- # net_devs=() 00:19:44.786 17:30:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:44.786 17:30:23 -- nvmf/common.sh@295 -- # e810=() 00:19:44.786 17:30:23 -- nvmf/common.sh@295 -- # local -ga e810 00:19:44.786 17:30:23 -- nvmf/common.sh@296 -- # x722=() 00:19:44.786 17:30:23 -- nvmf/common.sh@296 -- # local -ga x722 00:19:44.786 17:30:23 -- nvmf/common.sh@297 -- # mlx=() 00:19:44.786 17:30:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:44.786 17:30:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.786 17:30:23 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.786 17:30:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:44.786 17:30:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:44.786 17:30:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:44.786 17:30:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:44.786 17:30:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:44.786 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:44.786 17:30:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:44.786 17:30:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:44.786 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:44.786 17:30:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:44.786 
17:30:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:44.786 17:30:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.786 17:30:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:44.786 17:30:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.786 17:30:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:44.786 Found net devices under 0000:af:00.0: cvl_0_0 00:19:44.786 17:30:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.786 17:30:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:44.786 17:30:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.786 17:30:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:44.786 17:30:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.786 17:30:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:44.786 Found net devices under 0000:af:00.1: cvl_0_1 00:19:44.786 17:30:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.786 17:30:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:44.786 17:30:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:44.786 17:30:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:44.786 17:30:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:44.786 17:30:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.786 17:30:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.786 17:30:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.786 17:30:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:44.786 17:30:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.786 17:30:23 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.786 17:30:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:44.786 17:30:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.787 17:30:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.787 17:30:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:44.787 17:30:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:44.787 17:30:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.787 17:30:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.787 17:30:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.787 17:30:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.787 17:30:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:44.787 17:30:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.787 17:30:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.787 17:30:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.787 17:30:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:44.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:19:44.787 00:19:44.787 --- 10.0.0.2 ping statistics --- 00:19:44.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.787 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:19:44.787 17:30:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:19:45.045 00:19:45.045 --- 10.0.0.1 ping statistics --- 00:19:45.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.045 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:19:45.045 17:30:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.045 17:30:23 -- nvmf/common.sh@410 -- # return 0 00:19:45.045 17:30:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:45.045 17:30:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.045 17:30:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:45.045 17:30:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:45.045 17:30:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.045 17:30:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:45.045 17:30:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:45.045 17:30:23 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:45.045 17:30:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:45.045 17:30:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:45.045 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:19:45.045 17:30:23 -- nvmf/common.sh@469 -- # nvmfpid=4133909 00:19:45.045 17:30:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:45.045 17:30:23 -- nvmf/common.sh@470 -- # waitforlisten 4133909 00:19:45.045 17:30:23 -- common/autotest_common.sh@819 -- # '[' -z 4133909 ']' 00:19:45.045 17:30:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.045 17:30:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:45.045 17:30:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:45.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.045 17:30:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:45.045 17:30:23 -- common/autotest_common.sh@10 -- # set +x 00:19:45.045 [2024-07-12 17:30:23.853288] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:45.045 [2024-07-12 17:30:23.853342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.045 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.045 [2024-07-12 17:30:23.940284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.045 [2024-07-12 17:30:23.982722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:45.045 [2024-07-12 17:30:23.982865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.045 [2024-07-12 17:30:23.982876] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.045 [2024-07-12 17:30:23.982886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.045 [2024-07-12 17:30:23.982926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.045 [2024-07-12 17:30:23.983030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.045 [2024-07-12 17:30:23.983097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.045 [2024-07-12 17:30:23.983099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.979 17:30:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:45.979 17:30:24 -- common/autotest_common.sh@852 -- # return 0 00:19:45.979 17:30:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:45.979 17:30:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:45.979 17:30:24 -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 17:30:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.979 17:30:24 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:46.237 [2024-07-12 17:30:25.034852] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.237 17:30:25 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.496 17:30:25 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:46.496 17:30:25 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.754 17:30:25 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:46.754 17:30:25 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.754 17:30:25 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:46.754 17:30:25 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.013 17:30:25 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:19:47.013 17:30:25 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:47.272 17:30:26 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.531 17:30:26 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:47.531 17:30:26 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.790 17:30:26 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:47.790 17:30:26 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:48.048 17:30:26 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:48.048 17:30:26 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:48.306 17:30:27 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:48.306 17:30:27 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:48.306 17:30:27 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.564 17:30:27 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:48.564 17:30:27 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:48.821 17:30:27 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.821 [2024-07-12 17:30:27.735338] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.821 17:30:27 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:49.079 17:30:28 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:49.337 17:30:28 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:50.714 17:30:29 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:50.714 17:30:29 -- common/autotest_common.sh@1177 -- # local i=0 00:19:50.714 17:30:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.714 17:30:29 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:50.714 17:30:29 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:50.714 17:30:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:52.618 17:30:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:52.618 17:30:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:52.618 17:30:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:52.618 17:30:31 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:52.618 17:30:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:52.618 17:30:31 -- common/autotest_common.sh@1187 -- # return 0 00:19:52.618 17:30:31 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:52.618 [global] 00:19:52.618 thread=1 00:19:52.618 invalidate=1 00:19:52.618 rw=write 00:19:52.618 time_based=1 00:19:52.618 runtime=1 00:19:52.618 ioengine=libaio 00:19:52.618 direct=1 00:19:52.618 bs=4096 00:19:52.618 
iodepth=1 00:19:52.618 norandommap=0 00:19:52.618 numjobs=1 00:19:52.618 00:19:52.618 verify_dump=1 00:19:52.618 verify_backlog=512 00:19:52.618 verify_state_save=0 00:19:52.618 do_verify=1 00:19:52.618 verify=crc32c-intel 00:19:52.618 [job0] 00:19:52.618 filename=/dev/nvme0n1 00:19:52.618 [job1] 00:19:52.618 filename=/dev/nvme0n2 00:19:52.618 [job2] 00:19:52.618 filename=/dev/nvme0n3 00:19:52.618 [job3] 00:19:52.618 filename=/dev/nvme0n4 00:19:52.915 Could not set queue depth (nvme0n1) 00:19:52.915 Could not set queue depth (nvme0n2) 00:19:52.915 Could not set queue depth (nvme0n3) 00:19:52.915 Could not set queue depth (nvme0n4) 00:19:53.191 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.191 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.191 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.191 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.191 fio-3.35 00:19:53.191 Starting 4 threads 00:19:54.601 00:19:54.601 job0: (groupid=0, jobs=1): err= 0: pid=4135465: Fri Jul 12 17:30:33 2024 00:19:54.601 read: IOPS=1802, BW=7209KiB/s (7382kB/s)(7216KiB/1001msec) 00:19:54.601 slat (nsec): min=6199, max=38762, avg=7328.17, stdev=1161.39 00:19:54.601 clat (usec): min=223, max=1034, avg=297.31, stdev=75.90 00:19:54.601 lat (usec): min=230, max=1041, avg=304.64, stdev=75.97 00:19:54.601 clat percentiles (usec): 00:19:54.601 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:19:54.601 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:19:54.601 | 70.00th=[ 285], 80.00th=[ 330], 90.00th=[ 453], 95.00th=[ 474], 00:19:54.601 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 873], 99.95th=[ 1037], 00:19:54.601 | 99.99th=[ 1037] 00:19:54.601 write: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:54.601 slat (nsec): min=9083, max=63875, avg=10949.54, stdev=3443.13 00:19:54.601 clat (usec): min=159, max=369, avg=204.77, stdev=25.33 00:19:54.601 lat (usec): min=172, max=383, avg=215.72, stdev=26.30 00:19:54.601 clat percentiles (usec): 00:19:54.601 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:19:54.601 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:19:54.601 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 258], 00:19:54.601 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 355], 99.95th=[ 355], 00:19:54.601 | 99.99th=[ 371] 00:19:54.601 bw ( KiB/s): min= 8192, max= 8192, per=31.90%, avg=8192.00, stdev= 0.00, samples=1 00:19:54.601 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:54.601 lat (usec) : 250=57.50%, 500=42.13%, 750=0.31%, 1000=0.03% 00:19:54.601 lat (msec) : 2=0.03% 00:19:54.601 cpu : usr=2.70%, sys=3.50%, ctx=3855, majf=0, minf=2 00:19:54.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 issued rwts: total=1804,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.602 job1: (groupid=0, jobs=1): err= 0: pid=4135466: Fri Jul 12 17:30:33 2024 00:19:54.602 read: IOPS=254, BW=1020KiB/s (1044kB/s)(1044KiB/1024msec) 00:19:54.602 slat (nsec): min=6586, max=27824, avg=7794.68, stdev=2376.62 00:19:54.602 clat (usec): min=296, max=41105, avg=3540.79, stdev=10808.68 00:19:54.602 lat (usec): min=313, max=41126, avg=3548.59, stdev=10810.04 00:19:54.602 clat percentiles (usec): 00:19:54.602 | 1.00th=[ 306], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 375], 00:19:54.602 | 30.00th=[ 420], 40.00th=[ 441], 50.00th=[ 453], 60.00th=[ 465], 00:19:54.602 | 70.00th=[ 474], 
80.00th=[ 486], 90.00th=[ 515], 95.00th=[41157], 00:19:54.602 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:54.602 | 99.99th=[41157] 00:19:54.602 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:19:54.602 slat (nsec): min=9068, max=38147, avg=11489.05, stdev=3855.23 00:19:54.602 clat (usec): min=130, max=267, avg=175.12, stdev=18.97 00:19:54.602 lat (usec): min=140, max=304, avg=186.61, stdev=19.75 00:19:54.602 clat percentiles (usec): 00:19:54.602 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 159], 00:19:54.602 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:19:54.602 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:19:54.602 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 269], 99.95th=[ 269], 00:19:54.602 | 99.99th=[ 269] 00:19:54.602 bw ( KiB/s): min= 4096, max= 4096, per=15.95%, avg=4096.00, stdev= 0.00, samples=1 00:19:54.602 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:54.602 lat (usec) : 250=65.98%, 500=29.50%, 750=1.94% 00:19:54.602 lat (msec) : 50=2.59% 00:19:54.602 cpu : usr=0.78%, sys=0.39%, ctx=773, majf=0, minf=1 00:19:54.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 issued rwts: total=261,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.602 job2: (groupid=0, jobs=1): err= 0: pid=4135467: Fri Jul 12 17:30:33 2024 00:19:54.602 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:54.602 slat (nsec): min=7378, max=39401, avg=8268.17, stdev=1453.81 00:19:54.602 clat (usec): min=230, max=977, avg=307.91, stdev=57.53 00:19:54.602 lat (usec): min=238, max=985, avg=316.18, stdev=57.55 00:19:54.602 clat percentiles (usec): 00:19:54.602 | 
1.00th=[ 245], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:19:54.602 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 302], 00:19:54.602 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 457], 00:19:54.602 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[ 578], 99.95th=[ 979], 00:19:54.602 | 99.99th=[ 979] 00:19:54.602 write: IOPS=1965, BW=7860KiB/s (8049kB/s)(7868KiB/1001msec); 0 zone resets 00:19:54.602 slat (usec): min=10, max=24107, avg=24.48, stdev=543.30 00:19:54.602 clat (usec): min=175, max=475, avg=230.50, stdev=34.38 00:19:54.602 lat (usec): min=187, max=24550, avg=254.98, stdev=549.16 00:19:54.602 clat percentiles (usec): 00:19:54.602 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:19:54.602 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:19:54.602 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 281], 95.00th=[ 306], 00:19:54.602 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 445], 99.95th=[ 478], 00:19:54.602 | 99.99th=[ 478] 00:19:54.602 bw ( KiB/s): min= 8192, max= 8192, per=31.90%, avg=8192.00, stdev= 0.00, samples=1 00:19:54.602 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:54.602 lat (usec) : 250=45.28%, 500=54.35%, 750=0.34%, 1000=0.03% 00:19:54.602 cpu : usr=2.90%, sys=5.80%, ctx=3506, majf=0, minf=1 00:19:54.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 issued rwts: total=1536,1967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.602 job3: (groupid=0, jobs=1): err= 0: pid=4135468: Fri Jul 12 17:30:33 2024 00:19:54.602 read: IOPS=1810, BW=7241KiB/s (7415kB/s)(7248KiB/1001msec) 00:19:54.602 slat (nsec): min=6058, max=42570, avg=7457.81, stdev=1659.25 00:19:54.602 clat (usec): min=233, max=514, 
avg=290.38, stdev=33.64 00:19:54.602 lat (usec): min=240, max=521, avg=297.83, stdev=34.09 00:19:54.602 clat percentiles (usec): 00:19:54.602 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:19:54.602 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 293], 00:19:54.602 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 338], 00:19:54.602 | 99.00th=[ 437], 99.50th=[ 461], 99.90th=[ 502], 99.95th=[ 515], 00:19:54.602 | 99.99th=[ 515] 00:19:54.602 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:54.602 slat (nsec): min=9314, max=43861, avg=10738.54, stdev=1796.72 00:19:54.602 clat (usec): min=136, max=361, avg=209.63, stdev=23.71 00:19:54.602 lat (usec): min=146, max=395, avg=220.36, stdev=23.80 00:19:54.602 clat percentiles (usec): 00:19:54.602 | 1.00th=[ 151], 5.00th=[ 169], 10.00th=[ 184], 20.00th=[ 192], 00:19:54.602 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:19:54.602 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 247], 00:19:54.602 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 334], 00:19:54.602 | 99.99th=[ 363] 00:19:54.602 bw ( KiB/s): min= 8192, max= 8192, per=31.90%, avg=8192.00, stdev= 0.00, samples=1 00:19:54.602 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:54.602 lat (usec) : 250=53.16%, 500=46.79%, 750=0.05% 00:19:54.602 cpu : usr=1.60%, sys=4.20%, ctx=3861, majf=0, minf=1 00:19:54.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.602 issued rwts: total=1812,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.602 00:19:54.602 Run status group 0 (all jobs): 00:19:54.602 READ: bw=20.6MiB/s (21.7MB/s), 1020KiB/s-7241KiB/s 
(1044kB/s-7415kB/s), io=21.1MiB (22.2MB), run=1001-1024msec 00:19:54.602 WRITE: bw=25.1MiB/s (26.3MB/s), 2000KiB/s-8184KiB/s (2048kB/s-8380kB/s), io=25.7MiB (26.9MB), run=1001-1024msec 00:19:54.602 00:19:54.602 Disk stats (read/write): 00:19:54.602 nvme0n1: ios=1586/1742, merge=0/0, ticks=474/342, in_queue=816, util=87.17% 00:19:54.602 nvme0n2: ios=279/512, merge=0/0, ticks=737/90, in_queue=827, util=87.39% 00:19:54.602 nvme0n3: ios=1445/1536, merge=0/0, ticks=1421/317, in_queue=1738, util=98.54% 00:19:54.602 nvme0n4: ios=1560/1742, merge=0/0, ticks=1427/357, in_queue=1784, util=98.85% 00:19:54.602 17:30:33 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:54.602 [global] 00:19:54.602 thread=1 00:19:54.602 invalidate=1 00:19:54.602 rw=randwrite 00:19:54.602 time_based=1 00:19:54.602 runtime=1 00:19:54.602 ioengine=libaio 00:19:54.602 direct=1 00:19:54.602 bs=4096 00:19:54.602 iodepth=1 00:19:54.602 norandommap=0 00:19:54.602 numjobs=1 00:19:54.602 00:19:54.602 verify_dump=1 00:19:54.602 verify_backlog=512 00:19:54.602 verify_state_save=0 00:19:54.602 do_verify=1 00:19:54.602 verify=crc32c-intel 00:19:54.602 [job0] 00:19:54.602 filename=/dev/nvme0n1 00:19:54.602 [job1] 00:19:54.602 filename=/dev/nvme0n2 00:19:54.602 [job2] 00:19:54.602 filename=/dev/nvme0n3 00:19:54.602 [job3] 00:19:54.602 filename=/dev/nvme0n4 00:19:54.602 Could not set queue depth (nvme0n1) 00:19:54.602 Could not set queue depth (nvme0n2) 00:19:54.602 Could not set queue depth (nvme0n3) 00:19:54.602 Could not set queue depth (nvme0n4) 00:19:54.860 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:54.860 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:54.860 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:54.860 
job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:54.860 fio-3.35 00:19:54.860 Starting 4 threads 00:19:56.251 00:19:56.251 job0: (groupid=0, jobs=1): err= 0: pid=4135907: Fri Jul 12 17:30:34 2024 00:19:56.251 read: IOPS=219, BW=878KiB/s (899kB/s)(912KiB/1039msec) 00:19:56.251 slat (nsec): min=7120, max=18394, avg=8113.23, stdev=1320.45 00:19:56.251 clat (usec): min=244, max=42069, avg=3946.90, stdev=11556.48 00:19:56.251 lat (usec): min=252, max=42079, avg=3955.01, stdev=11557.31 00:19:56.251 clat percentiles (usec): 00:19:56.251 | 1.00th=[ 258], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 326], 00:19:56.251 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 375], 00:19:56.251 | 70.00th=[ 392], 80.00th=[ 474], 90.00th=[ 529], 95.00th=[41157], 00:19:56.251 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:56.251 | 99.99th=[42206] 00:19:56.251 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:19:56.251 slat (nsec): min=9253, max=34718, avg=10943.69, stdev=1692.96 00:19:56.251 clat (usec): min=177, max=512, avg=251.60, stdev=26.42 00:19:56.251 lat (usec): min=187, max=522, avg=262.54, stdev=26.75 00:19:56.251 clat percentiles (usec): 00:19:56.251 | 1.00th=[ 192], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:19:56.251 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:19:56.251 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:19:56.251 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 515], 99.95th=[ 515], 00:19:56.251 | 99.99th=[ 515] 00:19:56.251 bw ( KiB/s): min= 4096, max= 4096, per=18.85%, avg=4096.00, stdev= 0.00, samples=1 00:19:56.251 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:56.251 lat (usec) : 250=36.89%, 500=58.38%, 750=2.03% 00:19:56.251 lat (msec) : 50=2.70% 00:19:56.251 cpu : usr=0.48%, sys=0.67%, ctx=742, majf=0, minf=1 00:19:56.251 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 issued rwts: total=228,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.252 job1: (groupid=0, jobs=1): err= 0: pid=4135915: Fri Jul 12 17:30:34 2024 00:19:56.252 read: IOPS=1594, BW=6378KiB/s (6531kB/s)(6384KiB/1001msec) 00:19:56.252 slat (nsec): min=7936, max=43564, avg=9153.84, stdev=1835.40 00:19:56.252 clat (usec): min=211, max=572, avg=314.32, stdev=71.94 00:19:56.252 lat (usec): min=220, max=581, avg=323.48, stdev=72.15 00:19:56.252 clat percentiles (usec): 00:19:56.252 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 265], 00:19:56.252 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:19:56.252 | 70.00th=[ 314], 80.00th=[ 355], 90.00th=[ 461], 95.00th=[ 486], 00:19:56.252 | 99.00th=[ 515], 99.50th=[ 523], 99.90th=[ 562], 99.95th=[ 570], 00:19:56.252 | 99.99th=[ 570] 00:19:56.252 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:56.252 slat (nsec): min=10866, max=99162, avg=12678.03, stdev=2629.76 00:19:56.252 clat (usec): min=168, max=1074, avg=217.87, stdev=32.27 00:19:56.252 lat (usec): min=181, max=1087, avg=230.55, stdev=32.76 00:19:56.252 clat percentiles (usec): 00:19:56.252 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 194], 00:19:56.252 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:19:56.252 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 265], 00:19:56.252 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 396], 00:19:56.252 | 99.99th=[ 1074] 00:19:56.252 bw ( KiB/s): min= 8192, max= 8192, per=37.70%, avg=8192.00, stdev= 0.00, samples=1 00:19:56.252 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:56.252 lat (usec) : 
250=53.18%, 500=45.66%, 750=1.13% 00:19:56.252 lat (msec) : 2=0.03% 00:19:56.252 cpu : usr=4.20%, sys=5.30%, ctx=3646, majf=0, minf=1 00:19:56.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 issued rwts: total=1596,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.252 job2: (groupid=0, jobs=1): err= 0: pid=4135930: Fri Jul 12 17:30:34 2024 00:19:56.252 read: IOPS=1516, BW=6066KiB/s (6212kB/s)(6072KiB/1001msec) 00:19:56.252 slat (nsec): min=6437, max=21028, avg=7234.15, stdev=834.02 00:19:56.252 clat (usec): min=269, max=660, avg=379.70, stdev=36.06 00:19:56.252 lat (usec): min=276, max=668, avg=386.94, stdev=36.05 00:19:56.252 clat percentiles (usec): 00:19:56.252 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:19:56.252 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:19:56.252 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 445], 00:19:56.252 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 619], 99.95th=[ 660], 00:19:56.252 | 99.99th=[ 660] 00:19:56.252 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:56.252 slat (nsec): min=9406, max=36519, avg=10466.11, stdev=1208.69 00:19:56.252 clat (usec): min=180, max=650, avg=253.73, stdev=27.41 00:19:56.252 lat (usec): min=191, max=660, avg=264.20, stdev=27.75 00:19:56.252 clat percentiles (usec): 00:19:56.252 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:19:56.252 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:19:56.252 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:19:56.252 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 611], 99.95th=[ 652], 00:19:56.252 | 99.99th=[ 652] 00:19:56.252 bw ( KiB/s): min= 
8192, max= 8192, per=37.70%, avg=8192.00, stdev= 0.00, samples=1 00:19:56.252 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:56.252 lat (usec) : 250=23.15%, 500=75.87%, 750=0.98% 00:19:56.252 cpu : usr=0.70%, sys=3.70%, ctx=3056, majf=0, minf=1 00:19:56.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 issued rwts: total=1518,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.252 job3: (groupid=0, jobs=1): err= 0: pid=4135938: Fri Jul 12 17:30:34 2024 00:19:56.252 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:56.252 slat (nsec): min=6353, max=27275, avg=7244.94, stdev=955.36 00:19:56.252 clat (usec): min=267, max=556, avg=377.56, stdev=32.50 00:19:56.252 lat (usec): min=273, max=564, avg=384.81, stdev=32.48 00:19:56.252 clat percentiles (usec): 00:19:56.252 | 1.00th=[ 310], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:19:56.252 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:19:56.252 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 457], 00:19:56.252 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 537], 99.95th=[ 553], 00:19:56.252 | 99.99th=[ 553] 00:19:56.252 write: IOPS=1546, BW=6186KiB/s (6334kB/s)(6192KiB/1001msec); 0 zone resets 00:19:56.252 slat (nsec): min=9154, max=41059, avg=10493.52, stdev=1760.24 00:19:56.252 clat (usec): min=178, max=370, avg=249.16, stdev=22.53 00:19:56.252 lat (usec): min=188, max=411, avg=259.66, stdev=22.43 00:19:56.252 clat percentiles (usec): 00:19:56.252 | 1.00th=[ 192], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 235], 00:19:56.252 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:19:56.252 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 
00:19:56.252 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 371], 00:19:56.252 | 99.99th=[ 371] 00:19:56.252 bw ( KiB/s): min= 8192, max= 8192, per=37.70%, avg=8192.00, stdev= 0.00, samples=1 00:19:56.252 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:56.252 lat (usec) : 250=28.60%, 500=71.17%, 750=0.23% 00:19:56.252 cpu : usr=1.80%, sys=2.70%, ctx=3084, majf=0, minf=2 00:19:56.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.252 issued rwts: total=1536,1548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.252 00:19:56.252 Run status group 0 (all jobs): 00:19:56.252 READ: bw=18.3MiB/s (19.2MB/s), 878KiB/s-6378KiB/s (899kB/s-6531kB/s), io=19.1MiB (20.0MB), run=1001-1039msec 00:19:56.252 WRITE: bw=21.2MiB/s (22.2MB/s), 1971KiB/s-8184KiB/s (2018kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1039msec 00:19:56.252 00:19:56.252 Disk stats (read/write): 00:19:56.252 nvme0n1: ios=275/512, merge=0/0, ticks=1047/129, in_queue=1176, util=99.90% 00:19:56.252 nvme0n2: ios=1412/1536, merge=0/0, ticks=425/325, in_queue=750, util=84.87% 00:19:56.252 nvme0n3: ios=1110/1536, merge=0/0, ticks=1353/388, in_queue=1741, util=100.00% 00:19:56.252 nvme0n4: ios=1100/1536, merge=0/0, ticks=412/376, in_queue=788, util=89.44% 00:19:56.252 17:30:34 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:56.252 [global] 00:19:56.252 thread=1 00:19:56.252 invalidate=1 00:19:56.252 rw=write 00:19:56.252 time_based=1 00:19:56.252 runtime=1 00:19:56.252 ioengine=libaio 00:19:56.252 direct=1 00:19:56.252 bs=4096 00:19:56.252 iodepth=128 00:19:56.252 norandommap=0 00:19:56.252 numjobs=1 00:19:56.252 00:19:56.252 
verify_dump=1 00:19:56.252 verify_backlog=512 00:19:56.252 verify_state_save=0 00:19:56.252 do_verify=1 00:19:56.252 verify=crc32c-intel 00:19:56.252 [job0] 00:19:56.252 filename=/dev/nvme0n1 00:19:56.252 [job1] 00:19:56.252 filename=/dev/nvme0n2 00:19:56.252 [job2] 00:19:56.252 filename=/dev/nvme0n3 00:19:56.252 [job3] 00:19:56.252 filename=/dev/nvme0n4 00:19:56.252 Could not set queue depth (nvme0n1) 00:19:56.252 Could not set queue depth (nvme0n2) 00:19:56.252 Could not set queue depth (nvme0n3) 00:19:56.252 Could not set queue depth (nvme0n4) 00:19:56.508 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.508 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.509 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.509 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.509 fio-3.35 00:19:56.509 Starting 4 threads 00:19:57.898 00:19:57.898 job0: (groupid=0, jobs=1): err= 0: pid=4136376: Fri Jul 12 17:30:36 2024 00:19:57.898 read: IOPS=1716, BW=6864KiB/s (7029kB/s)(7228KiB/1053msec) 00:19:57.898 slat (usec): min=2, max=21769, avg=147.93, stdev=898.99 00:19:57.898 clat (usec): min=8482, max=86948, avg=21171.52, stdev=16222.87 00:19:57.898 lat (usec): min=8491, max=89120, avg=21319.45, stdev=16263.96 00:19:57.898 clat percentiles (usec): 00:19:57.898 | 1.00th=[ 9765], 5.00th=[11338], 10.00th=[12518], 20.00th=[14484], 00:19:57.898 | 30.00th=[15533], 40.00th=[16188], 50.00th=[16712], 60.00th=[17171], 00:19:57.898 | 70.00th=[19268], 80.00th=[20317], 90.00th=[24511], 95.00th=[71828], 00:19:57.898 | 99.00th=[86508], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:19:57.898 | 99.99th=[86508] 00:19:57.898 write: IOPS=1944, BW=7780KiB/s (7966kB/s)(8192KiB/1053msec); 0 zone resets 00:19:57.898 slat (usec): min=4, 
max=33715, avg=351.36, stdev=1945.47 00:19:57.898 clat (msec): min=9, max=168, avg=46.41, stdev=36.44 00:19:57.898 lat (msec): min=9, max=168, avg=46.76, stdev=36.68 00:19:57.898 clat percentiles (msec): 00:19:57.898 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 14], 00:19:57.898 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 35], 60.00th=[ 43], 00:19:57.898 | 70.00th=[ 53], 80.00th=[ 65], 90.00th=[ 93], 95.00th=[ 150], 00:19:57.898 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 169], 00:19:57.898 | 99.99th=[ 169] 00:19:57.898 bw ( KiB/s): min= 8192, max= 8192, per=24.78%, avg=8192.00, stdev= 0.00, samples=2 00:19:57.898 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:19:57.898 lat (msec) : 10=0.86%, 20=47.52%, 50=31.57%, 100=16.13%, 250=3.92% 00:19:57.898 cpu : usr=2.47%, sys=2.57%, ctx=206, majf=0, minf=1 00:19:57.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:57.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.898 issued rwts: total=1807,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.898 job1: (groupid=0, jobs=1): err= 0: pid=4136386: Fri Jul 12 17:30:36 2024 00:19:57.898 read: IOPS=2212, BW=8850KiB/s (9062kB/s)(8956KiB/1012msec) 00:19:57.898 slat (usec): min=2, max=22813, avg=202.47, stdev=1326.44 00:19:57.898 clat (usec): min=7607, max=78106, avg=23089.02, stdev=11377.20 00:19:57.898 lat (usec): min=7616, max=78115, avg=23291.49, stdev=11501.82 00:19:57.898 clat percentiles (usec): 00:19:57.898 | 1.00th=[12518], 5.00th=[12518], 10.00th=[13435], 20.00th=[17433], 00:19:57.898 | 30.00th=[17695], 40.00th=[18744], 50.00th=[19006], 60.00th=[19530], 00:19:57.898 | 70.00th=[23200], 80.00th=[24511], 90.00th=[39584], 95.00th=[51119], 00:19:57.898 | 99.00th=[66323], 99.50th=[72877], 99.90th=[78119], 
99.95th=[78119], 00:19:57.898 | 99.99th=[78119] 00:19:57.898 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:19:57.898 slat (usec): min=3, max=16709, avg=206.58, stdev=1061.49 00:19:57.898 clat (usec): min=5685, max=78099, avg=29902.66, stdev=13682.07 00:19:57.898 lat (usec): min=5696, max=78108, avg=30109.25, stdev=13792.10 00:19:57.898 clat percentiles (usec): 00:19:57.898 | 1.00th=[ 8291], 5.00th=[14746], 10.00th=[15401], 20.00th=[16712], 00:19:57.898 | 30.00th=[17433], 40.00th=[19268], 50.00th=[29230], 60.00th=[36963], 00:19:57.898 | 70.00th=[40633], 80.00th=[43779], 90.00th=[47449], 95.00th=[52691], 00:19:57.898 | 99.00th=[58983], 99.50th=[61604], 99.90th=[61604], 99.95th=[78119], 00:19:57.898 | 99.99th=[78119] 00:19:57.898 bw ( KiB/s): min= 9936, max=10544, per=30.97%, avg=10240.00, stdev=429.92, samples=2 00:19:57.898 iops : min= 2484, max= 2636, avg=2560.00, stdev=107.48, samples=2 00:19:57.898 lat (msec) : 10=0.90%, 20=51.49%, 50=41.72%, 100=5.90% 00:19:57.898 cpu : usr=2.87%, sys=3.56%, ctx=262, majf=0, minf=1 00:19:57.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:57.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.898 issued rwts: total=2239,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.898 job2: (groupid=0, jobs=1): err= 0: pid=4136401: Fri Jul 12 17:30:36 2024 00:19:57.898 read: IOPS=2210, BW=8841KiB/s (9053kB/s)(8956KiB/1013msec) 00:19:57.898 slat (usec): min=2, max=19695, avg=206.63, stdev=1366.35 00:19:57.898 clat (usec): min=6547, max=78229, avg=23445.37, stdev=12428.53 00:19:57.898 lat (usec): min=6556, max=78240, avg=23652.00, stdev=12555.86 00:19:57.898 clat percentiles (usec): 00:19:57.898 | 1.00th=[ 9241], 5.00th=[12780], 10.00th=[13435], 20.00th=[15664], 00:19:57.898 | 
30.00th=[17433], 40.00th=[17695], 50.00th=[19268], 60.00th=[21365], 00:19:57.898 | 70.00th=[22676], 80.00th=[25822], 90.00th=[41681], 95.00th=[53740], 00:19:57.898 | 99.00th=[66323], 99.50th=[72877], 99.90th=[78119], 99.95th=[78119], 00:19:57.898 | 99.99th=[78119] 00:19:57.898 write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec); 0 zone resets 00:19:57.898 slat (usec): min=4, max=17327, avg=203.49, stdev=1019.92 00:19:57.898 clat (usec): min=4512, max=78236, avg=29644.41, stdev=13883.94 00:19:57.898 lat (usec): min=4537, max=78248, avg=29847.90, stdev=13989.65 00:19:57.898 clat percentiles (usec): 00:19:57.898 | 1.00th=[ 6456], 5.00th=[12911], 10.00th=[14746], 20.00th=[16450], 00:19:57.898 | 30.00th=[18220], 40.00th=[19268], 50.00th=[27919], 60.00th=[35390], 00:19:57.898 | 70.00th=[40109], 80.00th=[43779], 90.00th=[48497], 95.00th=[52691], 00:19:57.898 | 99.00th=[59507], 99.50th=[61080], 99.90th=[64226], 99.95th=[78119], 00:19:57.898 | 99.99th=[78119] 00:19:57.898 bw ( KiB/s): min= 9936, max=10544, per=30.97%, avg=10240.00, stdev=429.92, samples=2 00:19:57.898 iops : min= 2484, max= 2636, avg=2560.00, stdev=107.48, samples=2 00:19:57.898 lat (msec) : 10=2.73%, 20=43.49%, 50=47.07%, 100=6.71% 00:19:57.898 cpu : usr=2.47%, sys=3.95%, ctx=264, majf=0, minf=1 00:19:57.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:57.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.898 issued rwts: total=2239,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.898 job3: (groupid=0, jobs=1): err= 0: pid=4136407: Fri Jul 12 17:30:36 2024 00:19:57.898 read: IOPS=1313, BW=5256KiB/s (5382kB/s)(5324KiB/1013msec) 00:19:57.898 slat (usec): min=2, max=25612, avg=258.41, stdev=1676.33 00:19:57.898 clat (usec): min=10268, max=94962, avg=35098.03, stdev=18702.88 
00:19:57.898 lat (msec): min=17, max=105, avg=35.36, stdev=18.86 00:19:57.898 clat percentiles (usec): 00:19:57.898 | 1.00th=[17171], 5.00th=[17171], 10.00th=[17171], 20.00th=[17433], 00:19:57.898 | 30.00th=[19006], 40.00th=[25822], 50.00th=[30540], 60.00th=[35390], 00:19:57.898 | 70.00th=[41681], 80.00th=[53216], 90.00th=[63177], 95.00th=[71828], 00:19:57.898 | 99.00th=[89654], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:19:57.898 | 99.99th=[94897] 00:19:57.898 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:19:57.898 slat (usec): min=4, max=41107, avg=419.97, stdev=2577.06 00:19:57.898 clat (msec): min=13, max=225, avg=41.16, stdev=25.23 00:19:57.898 lat (msec): min=13, max=225, avg=41.58, stdev=25.69 00:19:57.898 clat percentiles (msec): 00:19:57.898 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 15], 00:19:57.898 | 30.00th=[ 33], 40.00th=[ 37], 50.00th=[ 41], 60.00th=[ 44], 00:19:57.898 | 70.00th=[ 46], 80.00th=[ 50], 90.00th=[ 65], 95.00th=[ 83], 00:19:57.898 | 99.00th=[ 165], 99.50th=[ 192], 99.90th=[ 226], 99.95th=[ 226], 00:19:57.898 | 99.99th=[ 226] 00:19:57.898 bw ( KiB/s): min= 5960, max= 6328, per=18.58%, avg=6144.00, stdev=260.22, samples=2 00:19:57.898 iops : min= 1490, max= 1582, avg=1536.00, stdev=65.05, samples=2 00:19:57.898 lat (msec) : 20=28.32%, 50=49.95%, 100=20.61%, 250=1.12% 00:19:57.898 cpu : usr=2.27%, sys=1.98%, ctx=193, majf=0, minf=1 00:19:57.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:57.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.898 issued rwts: total=1331,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.898 00:19:57.898 Run status group 0 (all jobs): 00:19:57.898 READ: bw=28.3MiB/s (29.6MB/s), 5256KiB/s-8850KiB/s (5382kB/s-9062kB/s), io=29.8MiB 
(31.2MB), run=1012-1053msec 00:19:57.898 WRITE: bw=32.3MiB/s (33.9MB/s), 6065KiB/s-9.88MiB/s (6211kB/s-10.4MB/s), io=34.0MiB (35.7MB), run=1012-1053msec 00:19:57.898 00:19:57.898 Disk stats (read/write): 00:19:57.898 nvme0n1: ios=1264/1536, merge=0/0, ticks=10261/42593, in_queue=52854, util=87.27% 00:19:57.898 nvme0n2: ios=2065/2255, merge=0/0, ticks=43211/60304, in_queue=103515, util=97.66% 00:19:57.898 nvme0n3: ios=2105/2255, merge=0/0, ticks=43645/59759, in_queue=103404, util=91.05% 00:19:57.898 nvme0n4: ios=1060/1295, merge=0/0, ticks=17710/22773, in_queue=40483, util=98.11% 00:19:57.898 17:30:36 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:57.898 [global] 00:19:57.898 thread=1 00:19:57.898 invalidate=1 00:19:57.898 rw=randwrite 00:19:57.898 time_based=1 00:19:57.898 runtime=1 00:19:57.898 ioengine=libaio 00:19:57.898 direct=1 00:19:57.898 bs=4096 00:19:57.898 iodepth=128 00:19:57.898 norandommap=0 00:19:57.898 numjobs=1 00:19:57.898 00:19:57.898 verify_dump=1 00:19:57.898 verify_backlog=512 00:19:57.898 verify_state_save=0 00:19:57.898 do_verify=1 00:19:57.898 verify=crc32c-intel 00:19:57.898 [job0] 00:19:57.898 filename=/dev/nvme0n1 00:19:57.898 [job1] 00:19:57.898 filename=/dev/nvme0n2 00:19:57.898 [job2] 00:19:57.898 filename=/dev/nvme0n3 00:19:57.898 [job3] 00:19:57.898 filename=/dev/nvme0n4 00:19:57.898 Could not set queue depth (nvme0n1) 00:19:57.898 Could not set queue depth (nvme0n2) 00:19:57.898 Could not set queue depth (nvme0n3) 00:19:57.898 Could not set queue depth (nvme0n4) 00:19:58.158 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.158 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.158 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.158 
job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.158 fio-3.35 00:19:58.158 Starting 4 threads 00:19:59.539 00:19:59.539 job0: (groupid=0, jobs=1): err= 0: pid=4136859: Fri Jul 12 17:30:38 2024 00:19:59.539 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:19:59.539 slat (nsec): min=1918, max=52083k, avg=253539.62, stdev=1941829.07 00:19:59.539 clat (msec): min=8, max=113, avg=28.94, stdev=19.64 00:19:59.539 lat (msec): min=8, max=113, avg=29.19, stdev=19.77 00:19:59.539 clat percentiles (msec): 00:19:59.539 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:19:59.539 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 25], 60.00th=[ 28], 00:19:59.539 | 70.00th=[ 29], 80.00th=[ 33], 90.00th=[ 44], 95.00th=[ 66], 00:19:59.539 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:19:59.539 | 99.99th=[ 114] 00:19:59.539 write: IOPS=2445, BW=9783KiB/s (10.0MB/s)(9832KiB/1005msec); 0 zone resets 00:19:59.539 slat (usec): min=2, max=15503, avg=186.24, stdev=1061.96 00:19:59.539 clat (msec): min=3, max=160, avg=27.36, stdev=27.82 00:19:59.539 lat (msec): min=4, max=160, avg=27.55, stdev=27.97 00:19:59.539 clat percentiles (msec): 00:19:59.539 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 13], 00:19:59.539 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 22], 00:19:59.539 | 70.00th=[ 28], 80.00th=[ 34], 90.00th=[ 43], 95.00th=[ 104], 00:19:59.539 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 161], 00:19:59.539 | 99.99th=[ 161] 00:19:59.539 bw ( KiB/s): min= 7840, max=10808, per=14.40%, avg=9324.00, stdev=2098.69, samples=2 00:19:59.539 iops : min= 1960, max= 2702, avg=2331.00, stdev=524.67, samples=2 00:19:59.539 lat (msec) : 4=0.02%, 10=1.98%, 20=43.12%, 50=45.94%, 100=4.24% 00:19:59.539 lat (msec) : 250=4.70% 00:19:59.539 cpu : usr=1.89%, sys=2.69%, ctx=288, majf=0, minf=1 00:19:59.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 
32=0.7%, >=64=98.6% 00:19:59.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.539 issued rwts: total=2048,2458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.539 job1: (groupid=0, jobs=1): err= 0: pid=4136868: Fri Jul 12 17:30:38 2024 00:19:59.539 read: IOPS=6602, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:19:59.539 slat (nsec): min=1853, max=9120.1k, avg=70995.14, stdev=513644.76 00:19:59.539 clat (usec): min=3599, max=18914, avg=9330.96, stdev=2208.07 00:19:59.539 lat (usec): min=3612, max=18926, avg=9401.95, stdev=2235.21 00:19:59.539 clat percentiles (usec): 00:19:59.539 | 1.00th=[ 4686], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7767], 00:19:59.539 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 9110], 00:19:59.539 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[12387], 95.00th=[13566], 00:19:59.539 | 99.00th=[16319], 99.50th=[17433], 99.90th=[18482], 99.95th=[19006], 00:19:59.540 | 99.99th=[19006] 00:19:59.540 write: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec); 0 zone resets 00:19:59.540 slat (usec): min=3, max=38219, avg=66.77, stdev=607.00 00:19:59.540 clat (usec): min=1085, max=43937, avg=9169.19, stdev=5074.57 00:19:59.540 lat (usec): min=1098, max=43948, avg=9235.97, stdev=5094.62 00:19:59.540 clat percentiles (usec): 00:19:59.540 | 1.00th=[ 3097], 5.00th=[ 4490], 10.00th=[ 5407], 20.00th=[ 6325], 00:19:59.540 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:19:59.540 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[11731], 95.00th=[13304], 00:19:59.540 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:19:59.540 | 99.99th=[43779] 00:19:59.540 bw ( KiB/s): min=27704, max=28672, per=43.53%, avg=28188.00, stdev=684.48, samples=2 00:19:59.540 iops : min= 6926, max= 7168, avg=7047.00, stdev=171.12, 
samples=2 00:19:59.540 lat (msec) : 2=0.08%, 4=2.18%, 10=73.49%, 20=23.33%, 50=0.92% 00:19:59.540 cpu : usr=6.65%, sys=8.83%, ctx=607, majf=0, minf=1 00:19:59.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:59.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.540 issued rwts: total=6662,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.540 job2: (groupid=0, jobs=1): err= 0: pid=4136883: Fri Jul 12 17:30:38 2024 00:19:59.540 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:19:59.540 slat (nsec): min=1857, max=9043.8k, avg=146063.98, stdev=870250.61 00:19:59.540 clat (usec): min=9433, max=44825, avg=18472.01, stdev=6064.52 00:19:59.540 lat (usec): min=9441, max=47323, avg=18618.08, stdev=6142.88 00:19:59.540 clat percentiles (usec): 00:19:59.540 | 1.00th=[ 9896], 5.00th=[11338], 10.00th=[13566], 20.00th=[14353], 00:19:59.540 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15533], 60.00th=[17171], 00:19:59.540 | 70.00th=[19792], 80.00th=[25035], 90.00th=[27657], 95.00th=[29230], 00:19:59.540 | 99.00th=[36963], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:19:59.540 | 99.99th=[44827] 00:19:59.540 write: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1008msec); 0 zone resets 00:19:59.540 slat (usec): min=3, max=11539, avg=164.02, stdev=738.72 00:19:59.540 clat (usec): min=3681, max=56946, avg=21615.04, stdev=10775.70 00:19:59.540 lat (usec): min=8994, max=56953, avg=21779.06, stdev=10838.38 00:19:59.540 clat percentiles (usec): 00:19:59.540 | 1.00th=[10421], 5.00th=[11994], 10.00th=[13698], 20.00th=[15270], 00:19:59.540 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16319], 60.00th=[17433], 00:19:59.540 | 70.00th=[19530], 80.00th=[30540], 90.00th=[41681], 95.00th=[46400], 00:19:59.540 | 99.00th=[50594], 99.50th=[51643], 
99.90th=[56886], 99.95th=[56886], 00:19:59.540 | 99.99th=[56886] 00:19:59.540 bw ( KiB/s): min= 8656, max=16384, per=19.33%, avg=12520.00, stdev=5464.52, samples=2 00:19:59.540 iops : min= 2164, max= 4096, avg=3130.00, stdev=1366.13, samples=2 00:19:59.540 lat (msec) : 4=0.02%, 10=1.14%, 20=69.92%, 50=28.03%, 100=0.90% 00:19:59.540 cpu : usr=2.18%, sys=4.27%, ctx=407, majf=0, minf=1 00:19:59.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:59.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.540 issued rwts: total=3072,3258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.540 job3: (groupid=0, jobs=1): err= 0: pid=4136888: Fri Jul 12 17:30:38 2024 00:19:59.540 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:19:59.540 slat (usec): min=2, max=11903, avg=137.37, stdev=931.22 00:19:59.540 clat (usec): min=7293, max=42327, avg=19121.61, stdev=6246.84 00:19:59.540 lat (usec): min=7300, max=42334, avg=19258.98, stdev=6308.11 00:19:59.540 clat percentiles (usec): 00:19:59.540 | 1.00th=[ 8586], 5.00th=[12780], 10.00th=[13435], 20.00th=[14091], 00:19:59.540 | 30.00th=[14877], 40.00th=[15795], 50.00th=[18482], 60.00th=[19268], 00:19:59.540 | 70.00th=[21627], 80.00th=[24249], 90.00th=[25822], 95.00th=[27919], 00:19:59.540 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:59.540 | 99.99th=[42206] 00:19:59.540 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1006msec); 0 zone resets 00:19:59.540 slat (usec): min=3, max=29071, avg=152.97, stdev=1099.74 00:19:59.540 clat (usec): min=1996, max=45189, avg=19759.35, stdev=7156.87 00:19:59.540 lat (usec): min=3069, max=45197, avg=19912.32, stdev=7250.63 00:19:59.540 clat percentiles (usec): 00:19:59.540 | 1.00th=[ 4883], 5.00th=[11469], 10.00th=[13304], 20.00th=[13960], 
00:19:59.540 | 30.00th=[14746], 40.00th=[15533], 50.00th=[17433], 60.00th=[20317], 00:19:59.540 | 70.00th=[23462], 80.00th=[27132], 90.00th=[29754], 95.00th=[31589], 00:19:59.540 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:19:59.540 | 99.99th=[45351] 00:19:59.540 bw ( KiB/s): min=12288, max=14288, per=20.52%, avg=13288.00, stdev=1414.21, samples=2 00:19:59.540 iops : min= 3072, max= 3572, avg=3322.00, stdev=353.55, samples=2 00:19:59.540 lat (msec) : 2=0.02%, 4=0.28%, 10=3.33%, 20=55.98%, 50=40.40% 00:19:59.540 cpu : usr=2.29%, sys=4.98%, ctx=225, majf=0, minf=1 00:19:59.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:59.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.540 issued rwts: total=3072,3450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.540 00:19:59.540 Run status group 0 (all jobs): 00:19:59.540 READ: bw=57.5MiB/s (60.3MB/s), 8151KiB/s-25.8MiB/s (8347kB/s-27.0MB/s), io=58.0MiB (60.8MB), run=1005-1009msec 00:19:59.540 WRITE: bw=63.2MiB/s (66.3MB/s), 9783KiB/s-27.8MiB/s (10.0MB/s-29.1MB/s), io=63.8MiB (66.9MB), run=1005-1009msec 00:19:59.540 00:19:59.540 Disk stats (read/write): 00:19:59.540 nvme0n1: ios=1754/2048, merge=0/0, ticks=16682/19859, in_queue=36541, util=86.17% 00:19:59.540 nvme0n2: ios=5650/5895, merge=0/0, ticks=51780/48993, in_queue=100773, util=90.25% 00:19:59.540 nvme0n3: ios=2616/3071, merge=0/0, ticks=19601/24282, in_queue=43883, util=93.04% 00:19:59.540 nvme0n4: ios=2617/2958, merge=0/0, ticks=34983/40976, in_queue=75959, util=94.35% 00:19:59.540 17:30:38 -- target/fio.sh@55 -- # sync 00:19:59.540 17:30:38 -- target/fio.sh@59 -- # fio_pid=4137021 00:19:59.540 17:30:38 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t 
read -r 10 00:19:59.540 17:30:38 -- target/fio.sh@61 -- # sleep 3 00:19:59.540 [global] 00:19:59.540 thread=1 00:19:59.540 invalidate=1 00:19:59.540 rw=read 00:19:59.540 time_based=1 00:19:59.540 runtime=10 00:19:59.540 ioengine=libaio 00:19:59.540 direct=1 00:19:59.540 bs=4096 00:19:59.540 iodepth=1 00:19:59.540 norandommap=1 00:19:59.540 numjobs=1 00:19:59.540 00:19:59.540 [job0] 00:19:59.540 filename=/dev/nvme0n1 00:19:59.540 [job1] 00:19:59.540 filename=/dev/nvme0n2 00:19:59.540 [job2] 00:19:59.540 filename=/dev/nvme0n3 00:19:59.540 [job3] 00:19:59.540 filename=/dev/nvme0n4 00:19:59.540 Could not set queue depth (nvme0n1) 00:19:59.540 Could not set queue depth (nvme0n2) 00:19:59.540 Could not set queue depth (nvme0n3) 00:19:59.540 Could not set queue depth (nvme0n4) 00:19:59.798 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.798 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.798 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.798 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.798 fio-3.35 00:19:59.798 Starting 4 threads 00:20:02.323 17:30:41 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:02.581 17:30:41 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:02.581 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=274432, buflen=4096 00:20:02.581 fio: pid=4137370, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:02.839 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27308032, buflen=4096 00:20:02.839 fio: pid=4137362, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:02.839 17:30:41 -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:02.839 17:30:41 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:02.839 17:30:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:02.839 17:30:41 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:02.839 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=13836288, buflen=4096 00:20:02.839 fio: pid=4137320, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.097 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=327680, buflen=4096 00:20:03.097 fio: pid=4137338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.097 17:30:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.097 17:30:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:03.355 00:20:03.355 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4137320: Fri Jul 12 17:30:42 2024 00:20:03.355 read: IOPS=1108, BW=4435KiB/s (4541kB/s)(13.2MiB/3047msec) 00:20:03.355 slat (usec): min=3, max=16823, avg=12.44, stdev=289.29 00:20:03.355 clat (usec): min=245, max=41983, avg=881.60, stdev=4836.41 00:20:03.355 lat (usec): min=252, max=58007, avg=894.04, stdev=4888.07 00:20:03.355 clat percentiles (usec): 00:20:03.355 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:20:03.355 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:20:03.355 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 347], 00:20:03.355 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:20:03.355 | 99.99th=[42206] 00:20:03.355 bw ( 
KiB/s): min= 96, max=13184, per=43.58%, avg=5384.00, stdev=6879.28, samples=5 00:20:03.355 iops : min= 24, max= 3296, avg=1346.00, stdev=1719.82, samples=5 00:20:03.355 lat (usec) : 250=0.09%, 500=97.99%, 750=0.33%, 1000=0.12% 00:20:03.355 lat (msec) : 2=0.03%, 50=1.42% 00:20:03.355 cpu : usr=0.30%, sys=1.05%, ctx=3382, majf=0, minf=1 00:20:03.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 issued rwts: total=3379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.355 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4137338: Fri Jul 12 17:30:42 2024 00:20:03.355 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(320KiB/3300msec) 00:20:03.355 slat (usec): min=9, max=13691, avg=286.76, stdev=1731.23 00:20:03.355 clat (usec): min=644, max=48254, avg=40697.97, stdev=4618.17 00:20:03.355 lat (usec): min=752, max=55052, avg=40988.02, stdev=4962.36 00:20:03.355 clat percentiles (usec): 00:20:03.355 | 1.00th=[ 644], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:03.355 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:03.355 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:20:03.355 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:20:03.355 | 99.99th=[48497] 00:20:03.355 bw ( KiB/s): min= 96, max= 104, per=0.79%, avg=97.67, stdev= 3.20, samples=6 00:20:03.355 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:20:03.355 lat (usec) : 750=1.23% 00:20:03.355 lat (msec) : 50=97.53% 00:20:03.355 cpu : usr=0.00%, sys=0.09%, ctx=85, majf=0, minf=1 00:20:03.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.355 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.355 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4137362: Fri Jul 12 17:30:42 2024 00:20:03.355 read: IOPS=2336, BW=9344KiB/s (9568kB/s)(26.0MiB/2854msec) 00:20:03.355 slat (nsec): min=7282, max=55140, avg=8455.95, stdev=1660.21 00:20:03.355 clat (usec): min=220, max=42033, avg=413.97, stdev=2397.90 00:20:03.355 lat (usec): min=228, max=42053, avg=422.43, stdev=2398.63 00:20:03.355 clat percentiles (usec): 00:20:03.355 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 251], 00:20:03.355 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:20:03.355 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 314], 00:20:03.355 | 99.00th=[ 338], 99.50th=[ 408], 99.90th=[41157], 99.95th=[41681], 00:20:03.355 | 99.99th=[42206] 00:20:03.355 bw ( KiB/s): min= 4400, max=15232, per=84.32%, avg=10417.60, stdev=4969.11, samples=5 00:20:03.355 iops : min= 1100, max= 3808, avg=2604.40, stdev=1242.28, samples=5 00:20:03.355 lat (usec) : 250=19.75%, 500=79.81%, 750=0.01%, 1000=0.06% 00:20:03.355 lat (msec) : 50=0.34% 00:20:03.355 cpu : usr=1.16%, sys=3.96%, ctx=6670, majf=0, minf=1 00:20:03.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 issued rwts: total=6668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.355 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4137370: Fri Jul 12 17:30:42 2024 
00:20:03.355 read: IOPS=25, BW=99.5KiB/s (102kB/s)(268KiB/2693msec) 00:20:03.355 slat (nsec): min=9028, max=31201, avg=16546.40, stdev=5692.28 00:20:03.355 clat (usec): min=250, max=42021, avg=39843.41, stdev=6989.63 00:20:03.355 lat (usec): min=262, max=42045, avg=39859.86, stdev=6988.67 00:20:03.355 clat percentiles (usec): 00:20:03.355 | 1.00th=[ 251], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:03.355 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:03.355 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:20:03.355 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:03.355 | 99.99th=[42206] 00:20:03.355 bw ( KiB/s): min= 96, max= 112, per=0.80%, avg=99.20, stdev= 7.16, samples=5 00:20:03.355 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:20:03.355 lat (usec) : 500=2.94% 00:20:03.355 lat (msec) : 50=95.59% 00:20:03.355 cpu : usr=0.00%, sys=0.07%, ctx=69, majf=0, minf=2 00:20:03.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.355 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.355 00:20:03.355 Run status group 0 (all jobs): 00:20:03.355 READ: bw=12.1MiB/s (12.7MB/s), 97.0KiB/s-9344KiB/s (99.3kB/s-9568kB/s), io=39.8MiB (41.7MB), run=2693-3300msec 00:20:03.355 00:20:03.355 Disk stats (read/write): 00:20:03.355 nvme0n1: ios=3405/0, merge=0/0, ticks=2971/0, in_queue=2971, util=99.27% 00:20:03.355 nvme0n2: ios=75/0, merge=0/0, ticks=3052/0, in_queue=3052, util=95.33% 00:20:03.355 nvme0n3: ios=6704/0, merge=0/0, ticks=3481/0, in_queue=3481, util=99.49% 00:20:03.355 nvme0n4: ios=94/0, merge=0/0, ticks=3061/0, in_queue=3061, util=99.36% 00:20:03.355 17:30:42 -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.355 17:30:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:03.614 17:30:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.614 17:30:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:03.872 17:30:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.872 17:30:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:04.130 17:30:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:04.130 17:30:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:04.388 17:30:43 -- target/fio.sh@69 -- # fio_status=0 00:20:04.388 17:30:43 -- target/fio.sh@70 -- # wait 4137021 00:20:04.388 17:30:43 -- target/fio.sh@70 -- # fio_status=4 00:20:04.388 17:30:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:04.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:04.646 17:30:43 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:04.646 17:30:43 -- common/autotest_common.sh@1198 -- # local i=0 00:20:04.646 17:30:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:04.646 17:30:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.646 17:30:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:04.646 17:30:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.646 17:30:43 -- common/autotest_common.sh@1210 -- # return 0 00:20:04.646 17:30:43 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 
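The `waitforserial_disconnect` step traced above polls `lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME` until the NVMe serial disappears after `nvme disconnect`. A simplified sketch of that pattern follows; the `lsblk`/`grep` probe is taken from the log, but the 15-iteration retry budget is an illustrative assumption, not SPDK's actual timeout:

```shell
# Simplified sketch of the log's waitforserial_disconnect pattern:
# poll `lsblk -l -o NAME,SERIAL` until the given serial no longer appears.
# The retry limit (15 tries, ~15 s) is an assumption for illustration.
waitforserial_disconnect() {
  serial=$1
  i=0
  while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
    i=$((i + 1))
    if [ "$i" -gt 15 ]; then
      return 1        # device never went away
    fi
    sleep 1
  done
  return 0            # serial is no longer listed
}

waitforserial_disconnect SPDKISFASTANDAWESOME && echo "disconnected"
```

In the log this runs right after `nvme disconnect -n nqn.2016-06.io.spdk:cnode1`, so the loop normally exits on the first probe.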
00:20:04.646 17:30:43 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:04.646 nvmf hotplug test: fio failed as expected 00:20:04.646 17:30:43 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.905 17:30:43 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:04.905 17:30:43 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:04.905 17:30:43 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:04.905 17:30:43 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:04.905 17:30:43 -- target/fio.sh@91 -- # nvmftestfini 00:20:04.905 17:30:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:04.905 17:30:43 -- nvmf/common.sh@116 -- # sync 00:20:04.905 17:30:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:04.905 17:30:43 -- nvmf/common.sh@119 -- # set +e 00:20:04.905 17:30:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:04.905 17:30:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:04.905 rmmod nvme_tcp 00:20:04.905 rmmod nvme_fabrics 00:20:04.905 rmmod nvme_keyring 00:20:04.905 17:30:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:04.905 17:30:43 -- nvmf/common.sh@123 -- # set -e 00:20:04.905 17:30:43 -- nvmf/common.sh@124 -- # return 0 00:20:04.905 17:30:43 -- nvmf/common.sh@477 -- # '[' -n 4133909 ']' 00:20:04.905 17:30:43 -- nvmf/common.sh@478 -- # killprocess 4133909 00:20:04.905 17:30:43 -- common/autotest_common.sh@926 -- # '[' -z 4133909 ']' 00:20:04.905 17:30:43 -- common/autotest_common.sh@930 -- # kill -0 4133909 00:20:04.905 17:30:43 -- common/autotest_common.sh@931 -- # uname 00:20:04.905 17:30:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:04.905 17:30:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4133909 00:20:04.905 17:30:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:04.905 17:30:43 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:04.905 17:30:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4133909' 00:20:04.905 killing process with pid 4133909 00:20:04.905 17:30:43 -- common/autotest_common.sh@945 -- # kill 4133909 00:20:04.905 17:30:43 -- common/autotest_common.sh@950 -- # wait 4133909 00:20:05.163 17:30:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:05.163 17:30:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:05.163 17:30:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:05.163 17:30:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.163 17:30:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:05.163 17:30:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.163 17:30:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.163 17:30:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.693 17:30:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:07.693 00:20:07.693 real 0m28.040s 00:20:07.693 user 2m19.990s 00:20:07.693 sys 0m8.271s 00:20:07.693 17:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.693 17:30:46 -- common/autotest_common.sh@10 -- # set +x 00:20:07.693 ************************************ 00:20:07.693 END TEST nvmf_fio_target 00:20:07.693 ************************************ 00:20:07.693 17:30:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:07.693 17:30:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:07.693 17:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:07.693 17:30:46 -- common/autotest_common.sh@10 -- # set +x 00:20:07.693 ************************************ 00:20:07.693 START TEST nvmf_bdevio 00:20:07.693 ************************************ 00:20:07.693 17:30:46 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:07.693 * Looking for test storage... 00:20:07.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.693 17:30:46 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.693 17:30:46 -- nvmf/common.sh@7 -- # uname -s 00:20:07.693 17:30:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.694 17:30:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.694 17:30:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.694 17:30:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.694 17:30:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.694 17:30:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.694 17:30:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.694 17:30:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.694 17:30:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.694 17:30:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.694 17:30:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:07.694 17:30:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:07.694 17:30:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.694 17:30:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.694 17:30:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.694 17:30:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.694 17:30:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.694 17:30:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.694 17:30:46 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.694 17:30:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.694 17:30:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.694 17:30:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.694 17:30:46 -- paths/export.sh@5 -- # export PATH 00:20:07.694 17:30:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.694 17:30:46 -- nvmf/common.sh@46 -- # : 0 00:20:07.694 17:30:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:07.694 17:30:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:07.694 17:30:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:07.694 17:30:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.694 17:30:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.694 17:30:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:07.694 17:30:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:07.694 17:30:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:07.694 17:30:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.694 17:30:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.694 17:30:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:07.694 17:30:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:07.694 17:30:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.694 17:30:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:07.694 17:30:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:07.694 17:30:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:07.694 17:30:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.694 17:30:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.694 17:30:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.694 17:30:46 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:07.694 17:30:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:07.694 17:30:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:07.694 17:30:46 -- common/autotest_common.sh@10 -- # set +x 00:20:12.956 17:30:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:12.956 17:30:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:12.956 17:30:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:12.956 17:30:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:12.956 17:30:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:12.956 17:30:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:12.956 17:30:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:12.956 17:30:51 -- nvmf/common.sh@294 -- # net_devs=() 00:20:12.956 17:30:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:12.956 17:30:51 -- nvmf/common.sh@295 -- # e810=() 00:20:12.956 17:30:51 -- nvmf/common.sh@295 -- # local -ga e810 00:20:12.956 17:30:51 -- nvmf/common.sh@296 -- # x722=() 00:20:12.956 17:30:51 -- nvmf/common.sh@296 -- # local -ga x722 00:20:12.956 17:30:51 -- nvmf/common.sh@297 -- # mlx=() 00:20:12.956 17:30:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:12.956 17:30:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.956 17:30:51 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.956 17:30:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:12.956 17:30:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:12.956 17:30:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:12.956 17:30:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:12.956 17:30:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:12.956 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:12.956 17:30:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:12.956 17:30:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:12.956 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:12.956 17:30:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:12.956 17:30:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:12.956 17:30:51 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:12.956 17:30:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.956 17:30:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:12.956 17:30:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.956 17:30:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:12.956 Found net devices under 0000:af:00.0: cvl_0_0 00:20:12.956 17:30:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.956 17:30:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:12.956 17:30:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.956 17:30:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:12.956 17:30:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.956 17:30:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:12.956 Found net devices under 0000:af:00.1: cvl_0_1 00:20:12.956 17:30:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.956 17:30:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:12.956 17:30:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:12.956 17:30:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:12.956 17:30:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:12.956 17:30:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.956 17:30:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.956 17:30:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.956 17:30:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:12.956 17:30:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.956 17:30:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.956 17:30:51 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:12.956 17:30:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.957 17:30:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.957 17:30:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:12.957 17:30:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:12.957 17:30:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.957 17:30:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.957 17:30:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.957 17:30:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.957 17:30:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:12.957 17:30:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.957 17:30:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.957 17:30:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.957 17:30:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:12.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:12.957 00:20:12.957 --- 10.0.0.2 ping statistics --- 00:20:12.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.957 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:12.957 17:30:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:20:12.957 00:20:12.957 --- 10.0.0.1 ping statistics --- 00:20:12.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.957 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:20:12.957 17:30:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.957 17:30:51 -- nvmf/common.sh@410 -- # return 0 00:20:12.957 17:30:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:12.957 17:30:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.957 17:30:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:12.957 17:30:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:12.957 17:30:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.957 17:30:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:12.957 17:30:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:12.957 17:30:51 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:12.957 17:30:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:12.957 17:30:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:12.957 17:30:51 -- common/autotest_common.sh@10 -- # set +x 00:20:12.957 17:30:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:12.957 17:30:51 -- nvmf/common.sh@469 -- # nvmfpid=4141840 00:20:12.957 17:30:51 -- nvmf/common.sh@470 -- # waitforlisten 4141840 00:20:12.957 17:30:51 -- common/autotest_common.sh@819 -- # '[' -z 4141840 ']' 00:20:12.957 17:30:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.957 17:30:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:12.957 17:30:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:12.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.957 17:30:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:12.957 17:30:51 -- common/autotest_common.sh@10 -- # set +x 00:20:12.957 [2024-07-12 17:30:51.914421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:12.957 [2024-07-12 17:30:51.914478] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.216 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.216 [2024-07-12 17:30:51.992181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.216 [2024-07-12 17:30:52.035221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:13.216 [2024-07-12 17:30:52.035374] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.216 [2024-07-12 17:30:52.035387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.216 [2024-07-12 17:30:52.035396] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
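The `waitforlisten 4141840` step here blocks until the freshly started `nvmf_tgt` answers on its RPC UNIX socket, failing early if the process dies. A minimal stand-in for that wait loop is sketched below; the `/var/tmp/spdk.sock` default and the 100 x 0.1 s retry budget are assumptions for illustration, not SPDK's actual implementation:

```shell
# Rough sketch of a waitforlisten-style helper: succeed once the RPC
# UNIX socket exists, fail if the target process exits first or the
# (assumed) 100 x 0.1 s budget runs out.
waitforlisten() {
  pid=$1
  sock=${2:-/var/tmp/spdk.sock}   # assumed default socket path
  n=0
  while [ "$n" -lt 100 ]; do
    if [ -S "$sock" ]; then
      return 0                    # target is listening
    fi
    if ! kill -0 "$pid" 2>/dev/null; then
      return 1                    # target died before listening
    fi
    n=$((n + 1))
    sleep 0.1
  done
  return 1                        # timed out
}
```

In the log the real helper is invoked immediately after launching `nvmf_tgt -i 0 -e 0xFFFF -m 0x78` inside the `cvl_0_0_ns_spdk` namespace.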
00:20:13.216 [2024-07-12 17:30:52.035518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:13.216 [2024-07-12 17:30:52.035630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:13.216 [2024-07-12 17:30:52.035736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:13.216 [2024-07-12 17:30:52.035737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.152 17:30:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.152 17:30:52 -- common/autotest_common.sh@852 -- # return 0 00:20:14.152 17:30:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.152 17:30:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:14.152 17:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 17:30:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.152 17:30:52 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.152 17:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.152 17:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 [2024-07-12 17:30:52.816165] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.152 17:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.152 17:30:52 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.152 17:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.152 17:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 Malloc0 00:20:14.152 17:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.152 17:30:52 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.152 17:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.152 17:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 17:30:52 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:20:14.152 17:30:52 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.152 17:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.152 17:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 17:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.152 17:30:52 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.152 17:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.152 17:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 [2024-07-12 17:30:52.872461] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.152 17:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.152 17:30:52 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:14.152 17:30:52 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:14.152 17:30:52 -- nvmf/common.sh@520 -- # config=() 00:20:14.152 17:30:52 -- nvmf/common.sh@520 -- # local subsystem config 00:20:14.152 17:30:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:14.152 17:30:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:14.152 { 00:20:14.152 "params": { 00:20:14.152 "name": "Nvme$subsystem", 00:20:14.152 "trtype": "$TEST_TRANSPORT", 00:20:14.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.152 "adrfam": "ipv4", 00:20:14.152 "trsvcid": "$NVMF_PORT", 00:20:14.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.152 "hdgst": ${hdgst:-false}, 00:20:14.152 "ddgst": ${ddgst:-false} 00:20:14.152 }, 00:20:14.152 "method": "bdev_nvme_attach_controller" 00:20:14.152 } 00:20:14.152 EOF 00:20:14.152 )") 00:20:14.152 17:30:52 -- nvmf/common.sh@542 -- # cat 00:20:14.152 17:30:52 -- nvmf/common.sh@544 -- # jq . 
00:20:14.152 17:30:52 -- nvmf/common.sh@545 -- # IFS=, 00:20:14.152 17:30:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:14.152 "params": { 00:20:14.152 "name": "Nvme1", 00:20:14.152 "trtype": "tcp", 00:20:14.152 "traddr": "10.0.0.2", 00:20:14.152 "adrfam": "ipv4", 00:20:14.152 "trsvcid": "4420", 00:20:14.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.152 "hdgst": false, 00:20:14.152 "ddgst": false 00:20:14.152 }, 00:20:14.152 "method": "bdev_nvme_attach_controller" 00:20:14.152 }' 00:20:14.152 [2024-07-12 17:30:52.920763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:14.152 [2024-07-12 17:30:52.920807] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142025 ] 00:20:14.152 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.152 [2024-07-12 17:30:52.991777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.152 [2024-07-12 17:30:53.034182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.152 [2024-07-12 17:30:53.034297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.152 [2024-07-12 17:30:53.034301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.410 [2024-07-12 17:30:53.338879] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:14.410 [2024-07-12 17:30:53.338917] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:14.410 I/O targets: 00:20:14.410 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:14.410 00:20:14.410 00:20:14.410 CUnit - A unit testing framework for C - Version 2.1-3 00:20:14.410 http://cunit.sourceforge.net/ 00:20:14.410 00:20:14.410 00:20:14.410 Suite: bdevio tests on: Nvme1n1 00:20:14.668 Test: blockdev write read block ...passed 00:20:14.668 Test: blockdev write zeroes read block ...passed 00:20:14.668 Test: blockdev write zeroes read no split ...passed 00:20:14.668 Test: blockdev write zeroes read split ...passed 00:20:14.668 Test: blockdev write zeroes read split partial ...passed 00:20:14.668 Test: blockdev reset ...[2024-07-12 17:30:53.456801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:14.668 [2024-07-12 17:30:53.456861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d7160 (9): Bad file descriptor 00:20:14.668 [2024-07-12 17:30:53.470805] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:14.668 passed 00:20:14.668 Test: blockdev write read 8 blocks ...passed 00:20:14.668 Test: blockdev write read size > 128k ...passed 00:20:14.668 Test: blockdev write read invalid size ...passed 00:20:14.668 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:14.668 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:14.668 Test: blockdev write read max offset ...passed 00:20:14.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:14.927 Test: blockdev writev readv 8 blocks ...passed 00:20:14.927 Test: blockdev writev readv 30 x 1block ...passed 00:20:14.927 Test: blockdev writev readv block ...passed 00:20:14.927 Test: blockdev writev readv size > 128k ...passed 00:20:14.927 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:14.927 Test: blockdev comparev and writev ...[2024-07-12 17:30:53.768376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.768403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.768415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.768422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.769034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.769047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.769058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.769064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.769668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.769677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.769687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.769694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.770311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.770320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.770330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:14.927 [2024-07-12 17:30:53.770337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:14.927 passed 00:20:14.927 Test: blockdev nvme passthru rw ...passed 00:20:14.927 Test: blockdev nvme passthru vendor specific ...[2024-07-12 17:30:53.853697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.927 [2024-07-12 17:30:53.853723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.853908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.927 [2024-07-12 17:30:53.853916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.854097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.927 [2024-07-12 17:30:53.854105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:14.927 [2024-07-12 17:30:53.854307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:14.927 [2024-07-12 17:30:53.854317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:14.927 passed 00:20:14.927 Test: blockdev nvme admin passthru ...passed 00:20:15.186 Test: blockdev copy ...passed 00:20:15.186 00:20:15.186 Run Summary: Type Total Ran Passed Failed Inactive 00:20:15.186 suites 1 1 n/a 0 0 00:20:15.186 tests 23 23 23 0 0 00:20:15.186 asserts 152 152 152 0 n/a 00:20:15.186 00:20:15.186 Elapsed time = 1.142 seconds 00:20:15.186 17:30:54 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.186 17:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.186 17:30:54 -- common/autotest_common.sh@10 -- # set +x 00:20:15.186 17:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.186 17:30:54 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:15.186 17:30:54 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:15.186 17:30:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:15.186 17:30:54 -- nvmf/common.sh@116 -- # sync 00:20:15.186 
17:30:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:15.186 17:30:54 -- nvmf/common.sh@119 -- # set +e 00:20:15.186 17:30:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:15.186 17:30:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:15.186 rmmod nvme_tcp 00:20:15.186 rmmod nvme_fabrics 00:20:15.186 rmmod nvme_keyring 00:20:15.186 17:30:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:15.444 17:30:54 -- nvmf/common.sh@123 -- # set -e 00:20:15.444 17:30:54 -- nvmf/common.sh@124 -- # return 0 00:20:15.444 17:30:54 -- nvmf/common.sh@477 -- # '[' -n 4141840 ']' 00:20:15.444 17:30:54 -- nvmf/common.sh@478 -- # killprocess 4141840 00:20:15.444 17:30:54 -- common/autotest_common.sh@926 -- # '[' -z 4141840 ']' 00:20:15.444 17:30:54 -- common/autotest_common.sh@930 -- # kill -0 4141840 00:20:15.444 17:30:54 -- common/autotest_common.sh@931 -- # uname 00:20:15.444 17:30:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:15.444 17:30:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4141840 00:20:15.444 17:30:54 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:15.444 17:30:54 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:15.444 17:30:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4141840' 00:20:15.444 killing process with pid 4141840 00:20:15.444 17:30:54 -- common/autotest_common.sh@945 -- # kill 4141840 00:20:15.444 17:30:54 -- common/autotest_common.sh@950 -- # wait 4141840 00:20:15.703 17:30:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:15.703 17:30:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:15.703 17:30:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:15.703 17:30:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.703 17:30:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:15.703 17:30:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.703 17:30:54 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.703 17:30:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.606 17:30:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:17.606 00:20:17.606 real 0m10.373s 00:20:17.606 user 0m13.516s 00:20:17.606 sys 0m4.760s 00:20:17.606 17:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.606 17:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:17.606 ************************************ 00:20:17.606 END TEST nvmf_bdevio 00:20:17.606 ************************************ 00:20:17.606 17:30:56 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:20:17.606 17:30:56 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:17.606 17:30:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:17.606 17:30:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:17.606 17:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:17.606 ************************************ 00:20:17.606 START TEST nvmf_bdevio_no_huge 00:20:17.606 ************************************ 00:20:17.606 17:30:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:17.865 * Looking for test storage... 
00:20:17.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.865 17:30:56 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.865 17:30:56 -- nvmf/common.sh@7 -- # uname -s 00:20:17.865 17:30:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.865 17:30:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.865 17:30:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.865 17:30:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.865 17:30:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.865 17:30:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.865 17:30:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.865 17:30:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.865 17:30:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.865 17:30:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.866 17:30:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:17.866 17:30:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:17.866 17:30:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.866 17:30:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.866 17:30:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.866 17:30:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.866 17:30:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.866 17:30:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.866 17:30:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.866 17:30:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 17:30:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 17:30:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 17:30:56 -- paths/export.sh@5 -- # export PATH 00:20:17.866 17:30:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 17:30:56 -- nvmf/common.sh@46 -- # : 0 00:20:17.866 17:30:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:17.866 17:30:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:17.866 17:30:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:17.866 17:30:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.866 17:30:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.866 17:30:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:17.866 17:30:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:17.866 17:30:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:17.866 17:30:56 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:17.866 17:30:56 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:17.866 17:30:56 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:17.866 17:30:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:17.866 17:30:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.866 17:30:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:17.866 17:30:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:17.866 17:30:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:17.866 17:30:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.866 17:30:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.866 17:30:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.866 17:30:56 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:17.866 17:30:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:17.866 17:30:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:17.866 17:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.128 17:31:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:23.128 17:31:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:23.128 17:31:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:23.128 17:31:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:23.128 17:31:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:23.128 17:31:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:23.128 17:31:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:23.128 17:31:01 -- nvmf/common.sh@294 -- # net_devs=() 00:20:23.128 17:31:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:23.128 17:31:01 -- nvmf/common.sh@295 -- # e810=() 00:20:23.128 17:31:01 -- nvmf/common.sh@295 -- # local -ga e810 00:20:23.128 17:31:01 -- nvmf/common.sh@296 -- # x722=() 00:20:23.128 17:31:01 -- nvmf/common.sh@296 -- # local -ga x722 00:20:23.128 17:31:01 -- nvmf/common.sh@297 -- # mlx=() 00:20:23.128 17:31:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:23.128 17:31:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.128 17:31:01 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.128 17:31:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:23.128 17:31:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:23.128 17:31:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:23.128 17:31:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:23.128 17:31:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:23.128 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:23.128 17:31:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:23.128 17:31:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:23.128 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:23.128 17:31:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:23.128 17:31:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:23.128 17:31:01 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:23.128 17:31:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.128 17:31:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:23.128 17:31:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.128 17:31:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:23.128 Found net devices under 0000:af:00.0: cvl_0_0 00:20:23.128 17:31:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.128 17:31:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:23.128 17:31:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.128 17:31:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:23.128 17:31:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.128 17:31:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:23.128 Found net devices under 0000:af:00.1: cvl_0_1 00:20:23.128 17:31:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.128 17:31:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:23.128 17:31:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:23.128 17:31:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:23.128 17:31:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:23.128 17:31:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.128 17:31:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.128 17:31:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.128 17:31:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:23.128 17:31:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.128 17:31:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.128 17:31:01 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:23.128 17:31:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.128 17:31:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.128 17:31:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:23.128 17:31:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:23.128 17:31:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.128 17:31:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.128 17:31:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.128 17:31:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.128 17:31:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:23.128 17:31:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.128 17:31:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.128 17:31:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.128 17:31:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:23.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:20:23.128 00:20:23.128 --- 10.0.0.2 ping statistics --- 00:20:23.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.128 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:23.128 17:31:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:20:23.128 00:20:23.128 --- 10.0.0.1 ping statistics --- 00:20:23.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.128 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:23.128 17:31:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.128 17:31:02 -- nvmf/common.sh@410 -- # return 0 00:20:23.128 17:31:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:23.128 17:31:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.128 17:31:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:23.128 17:31:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:23.128 17:31:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.128 17:31:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:23.128 17:31:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:23.128 17:31:02 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:23.128 17:31:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:23.128 17:31:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:23.128 17:31:02 -- common/autotest_common.sh@10 -- # set +x 00:20:23.128 17:31:02 -- nvmf/common.sh@469 -- # nvmfpid=4145768 00:20:23.128 17:31:02 -- nvmf/common.sh@470 -- # waitforlisten 4145768 00:20:23.128 17:31:02 -- common/autotest_common.sh@819 -- # '[' -z 4145768 ']' 00:20:23.128 17:31:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.128 17:31:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:23.128 17:31:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:23.128 17:31:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:23.128 17:31:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:23.128 17:31:02 -- common/autotest_common.sh@10 -- # set +x 00:20:23.387 [2024-07-12 17:31:02.133472] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:23.387 [2024-07-12 17:31:02.133531] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:23.387 [2024-07-12 17:31:02.214837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:23.387 [2024-07-12 17:31:02.298123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:23.387 [2024-07-12 17:31:02.298279] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.387 [2024-07-12 17:31:02.298291] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.387 [2024-07-12 17:31:02.298300] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:23.387 [2024-07-12 17:31:02.298429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:23.387 [2024-07-12 17:31:02.298549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.387 [2024-07-12 17:31:02.298549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:23.387 [2024-07-12 17:31:02.298460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:24.324 17:31:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:24.324 17:31:02 -- common/autotest_common.sh@852 -- # return 0 00:20:24.324 17:31:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:24.324 17:31:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:24.324 17:31:02 -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 17:31:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.324 17:31:03 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.324 17:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.324 17:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 [2024-07-12 17:31:03.031218] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.324 17:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.324 17:31:03 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:24.324 17:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.324 17:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 Malloc0 00:20:24.324 17:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.324 17:31:03 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.324 17:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.324 17:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 17:31:03 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:20:24.324 17:31:03 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:24.324 17:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.324 17:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:24.325 17:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.325 17:31:03 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.325 17:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.325 17:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:24.325 [2024-07-12 17:31:03.075827] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.325 17:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.325 17:31:03 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:24.325 17:31:03 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:24.325 17:31:03 -- nvmf/common.sh@520 -- # config=() 00:20:24.325 17:31:03 -- nvmf/common.sh@520 -- # local subsystem config 00:20:24.325 17:31:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:24.325 17:31:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:24.325 { 00:20:24.325 "params": { 00:20:24.325 "name": "Nvme$subsystem", 00:20:24.325 "trtype": "$TEST_TRANSPORT", 00:20:24.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.325 "adrfam": "ipv4", 00:20:24.325 "trsvcid": "$NVMF_PORT", 00:20:24.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.325 "hdgst": ${hdgst:-false}, 00:20:24.325 "ddgst": ${ddgst:-false} 00:20:24.325 }, 00:20:24.325 "method": "bdev_nvme_attach_controller" 00:20:24.325 } 00:20:24.325 EOF 00:20:24.325 )") 00:20:24.325 17:31:03 -- nvmf/common.sh@542 -- # cat 00:20:24.325 17:31:03 -- nvmf/common.sh@544 -- # jq 
. 00:20:24.325 17:31:03 -- nvmf/common.sh@545 -- # IFS=, 00:20:24.325 17:31:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:24.325 "params": { 00:20:24.325 "name": "Nvme1", 00:20:24.325 "trtype": "tcp", 00:20:24.325 "traddr": "10.0.0.2", 00:20:24.325 "adrfam": "ipv4", 00:20:24.325 "trsvcid": "4420", 00:20:24.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.325 "hdgst": false, 00:20:24.325 "ddgst": false 00:20:24.325 }, 00:20:24.325 "method": "bdev_nvme_attach_controller" 00:20:24.325 }' 00:20:24.325 [2024-07-12 17:31:03.122113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:24.325 [2024-07-12 17:31:03.122155] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4146052 ] 00:20:24.325 [2024-07-12 17:31:03.188237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.325 [2024-07-12 17:31:03.272577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.325 [2024-07-12 17:31:03.272680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.325 [2024-07-12 17:31:03.272680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.892 [2024-07-12 17:31:03.588156] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:24.892 [2024-07-12 17:31:03.588192] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:24.892 I/O targets: 00:20:24.892 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:24.892 00:20:24.892 00:20:24.892 CUnit - A unit testing framework for C - Version 2.1-3 00:20:24.892 http://cunit.sourceforge.net/ 00:20:24.892 00:20:24.892 00:20:24.892 Suite: bdevio tests on: Nvme1n1 00:20:24.892 Test: blockdev write read block ...passed 00:20:24.892 Test: blockdev write zeroes read block ...passed 00:20:24.892 Test: blockdev write zeroes read no split ...passed 00:20:24.892 Test: blockdev write zeroes read split ...passed 00:20:24.893 Test: blockdev write zeroes read split partial ...passed 00:20:24.893 Test: blockdev reset ...[2024-07-12 17:31:03.794635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:24.893 [2024-07-12 17:31:03.794699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7bfd0 (9): Bad file descriptor 00:20:24.893 [2024-07-12 17:31:03.847959] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:24.893 passed 00:20:25.151 Test: blockdev write read 8 blocks ...passed 00:20:25.151 Test: blockdev write read size > 128k ...passed 00:20:25.151 Test: blockdev write read invalid size ...passed 00:20:25.151 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:25.151 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:25.151 Test: blockdev write read max offset ...passed 00:20:25.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:25.152 Test: blockdev writev readv 8 blocks ...passed 00:20:25.152 Test: blockdev writev readv 30 x 1block ...passed 00:20:25.152 Test: blockdev writev readv block ...passed 00:20:25.411 Test: blockdev writev readv size > 128k ...passed 00:20:25.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:25.411 Test: blockdev comparev and writev ...[2024-07-12 17:31:04.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.143515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.143527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.143534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.144021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.144029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.144039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.144045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.144558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.144568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.144578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.144584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.145045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.145054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.145064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.411 [2024-07-12 17:31:04.145070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.411 passed 00:20:25.411 Test: blockdev nvme passthru rw ...passed 00:20:25.411 Test: blockdev nvme passthru vendor specific ...[2024-07-12 17:31:04.228723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.411 [2024-07-12 17:31:04.228739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.228924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.411 [2024-07-12 17:31:04.228932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.229122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.411 [2024-07-12 17:31:04.229131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.411 [2024-07-12 17:31:04.229331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.411 [2024-07-12 17:31:04.229340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.411 passed 00:20:25.411 Test: blockdev nvme admin passthru ...passed 00:20:25.411 Test: blockdev copy ...passed 00:20:25.411 00:20:25.411 Run Summary: Type Total Ran Passed Failed Inactive 00:20:25.411 suites 1 1 n/a 0 0 00:20:25.411 tests 23 23 23 0 0 00:20:25.411 asserts 152 152 152 0 n/a 00:20:25.411 00:20:25.411 Elapsed time = 1.396 seconds 00:20:25.671 17:31:04 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.671 17:31:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.671 17:31:04 -- common/autotest_common.sh@10 -- # set +x 00:20:25.671 17:31:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.671 17:31:04 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:25.671 17:31:04 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:25.671 17:31:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:25.671 17:31:04 -- nvmf/common.sh@116 -- # sync 00:20:25.671 
17:31:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:25.671 17:31:04 -- nvmf/common.sh@119 -- # set +e 00:20:25.671 17:31:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:25.671 17:31:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:25.930 rmmod nvme_tcp 00:20:25.930 rmmod nvme_fabrics 00:20:25.930 rmmod nvme_keyring 00:20:25.930 17:31:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:25.930 17:31:04 -- nvmf/common.sh@123 -- # set -e 00:20:25.930 17:31:04 -- nvmf/common.sh@124 -- # return 0 00:20:25.930 17:31:04 -- nvmf/common.sh@477 -- # '[' -n 4145768 ']' 00:20:25.930 17:31:04 -- nvmf/common.sh@478 -- # killprocess 4145768 00:20:25.930 17:31:04 -- common/autotest_common.sh@926 -- # '[' -z 4145768 ']' 00:20:25.930 17:31:04 -- common/autotest_common.sh@930 -- # kill -0 4145768 00:20:25.930 17:31:04 -- common/autotest_common.sh@931 -- # uname 00:20:25.930 17:31:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:25.930 17:31:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4145768 00:20:25.930 17:31:04 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:25.930 17:31:04 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:25.930 17:31:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4145768' 00:20:25.930 killing process with pid 4145768 00:20:25.930 17:31:04 -- common/autotest_common.sh@945 -- # kill 4145768 00:20:25.930 17:31:04 -- common/autotest_common.sh@950 -- # wait 4145768 00:20:26.190 17:31:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:26.190 17:31:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:26.190 17:31:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:26.190 17:31:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.190 17:31:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:26.190 17:31:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.190 17:31:05 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.190 17:31:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.728 17:31:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:28.728 00:20:28.728 real 0m10.646s 00:20:28.728 user 0m15.201s 00:20:28.728 sys 0m5.146s 00:20:28.728 17:31:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.728 17:31:07 -- common/autotest_common.sh@10 -- # set +x 00:20:28.728 ************************************ 00:20:28.728 END TEST nvmf_bdevio_no_huge 00:20:28.728 ************************************ 00:20:28.728 17:31:07 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:28.728 17:31:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:28.728 17:31:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:28.728 17:31:07 -- common/autotest_common.sh@10 -- # set +x 00:20:28.728 ************************************ 00:20:28.728 START TEST nvmf_tls 00:20:28.728 ************************************ 00:20:28.728 17:31:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:28.728 * Looking for test storage... 
00:20:28.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.728 17:31:07 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.728 17:31:07 -- nvmf/common.sh@7 -- # uname -s 00:20:28.728 17:31:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.728 17:31:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.728 17:31:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.728 17:31:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.728 17:31:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.728 17:31:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.728 17:31:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.728 17:31:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.728 17:31:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.728 17:31:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.728 17:31:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:28.728 17:31:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:28.728 17:31:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.728 17:31:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.728 17:31:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.728 17:31:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.728 17:31:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.728 17:31:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.728 17:31:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.728 17:31:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.729 17:31:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.729 17:31:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.729 17:31:07 -- paths/export.sh@5 -- # export PATH 00:20:28.729 17:31:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.729 17:31:07 -- nvmf/common.sh@46 -- # : 0 00:20:28.729 17:31:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:28.729 17:31:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:28.729 17:31:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:28.729 17:31:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.729 17:31:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.729 17:31:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:28.729 17:31:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:28.729 17:31:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:28.729 17:31:07 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.729 17:31:07 -- target/tls.sh@71 -- # nvmftestinit 00:20:28.729 17:31:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:28.729 17:31:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.729 17:31:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:28.729 17:31:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:28.729 17:31:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:28.729 17:31:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.729 17:31:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.729 17:31:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.729 17:31:07 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:20:28.729 17:31:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:28.729 17:31:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:28.729 17:31:07 -- common/autotest_common.sh@10 -- # set +x 00:20:34.002 17:31:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.002 17:31:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:34.002 17:31:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:34.002 17:31:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:34.002 17:31:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:34.002 17:31:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:34.002 17:31:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:34.002 17:31:12 -- nvmf/common.sh@294 -- # net_devs=() 00:20:34.002 17:31:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:34.002 17:31:12 -- nvmf/common.sh@295 -- # e810=() 00:20:34.002 17:31:12 -- nvmf/common.sh@295 -- # local -ga e810 00:20:34.002 17:31:12 -- nvmf/common.sh@296 -- # x722=() 00:20:34.003 17:31:12 -- nvmf/common.sh@296 -- # local -ga x722 00:20:34.003 17:31:12 -- nvmf/common.sh@297 -- # mlx=() 00:20:34.003 17:31:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:34.003 17:31:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.003 17:31:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:34.003 17:31:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:34.003 17:31:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:34.003 17:31:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:34.003 17:31:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:34.003 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:34.003 17:31:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:34.003 17:31:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:34.003 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:34.003 17:31:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:34.003 17:31:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:34.003 17:31:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.003 17:31:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:34.003 17:31:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.003 17:31:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:34.003 Found net devices under 0000:af:00.0: cvl_0_0 00:20:34.003 17:31:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.003 17:31:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:34.003 17:31:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.003 17:31:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:34.003 17:31:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.003 17:31:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:34.003 Found net devices under 0000:af:00.1: cvl_0_1 00:20:34.003 17:31:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.003 17:31:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:34.003 17:31:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:34.003 17:31:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:34.003 17:31:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.003 17:31:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.003 17:31:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.003 17:31:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:34.003 17:31:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.003 17:31:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.003 17:31:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:20:34.003 17:31:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.003 17:31:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.003 17:31:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:34.003 17:31:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:34.003 17:31:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.003 17:31:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.003 17:31:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.003 17:31:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.003 17:31:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:34.003 17:31:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.003 17:31:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.003 17:31:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.003 17:31:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:34.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:20:34.003 00:20:34.003 --- 10.0.0.2 ping statistics --- 00:20:34.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.003 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:20:34.003 17:31:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:20:34.003 00:20:34.003 --- 10.0.0.1 ping statistics --- 00:20:34.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.003 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:20:34.003 17:31:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.003 17:31:12 -- nvmf/common.sh@410 -- # return 0 00:20:34.003 17:31:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:34.003 17:31:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.003 17:31:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:34.003 17:31:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.003 17:31:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:34.003 17:31:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:34.003 17:31:12 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:34.003 17:31:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:34.003 17:31:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:34.003 17:31:12 -- common/autotest_common.sh@10 -- # set +x 00:20:34.003 17:31:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:34.003 17:31:12 -- nvmf/common.sh@469 -- # nvmfpid=4149885 00:20:34.003 17:31:12 -- nvmf/common.sh@470 -- # waitforlisten 4149885 00:20:34.003 17:31:12 -- common/autotest_common.sh@819 -- # '[' -z 4149885 ']' 00:20:34.003 17:31:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.003 17:31:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.003 17:31:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:34.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.003 17:31:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.003 17:31:12 -- common/autotest_common.sh@10 -- # set +x 00:20:34.003 [2024-07-12 17:31:12.748656] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:34.003 [2024-07-12 17:31:12.748713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.003 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.003 [2024-07-12 17:31:12.827703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.003 [2024-07-12 17:31:12.869976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:34.003 [2024-07-12 17:31:12.870117] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.003 [2024-07-12 17:31:12.870127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.003 [2024-07-12 17:31:12.870137] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.003 [2024-07-12 17:31:12.870159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.003 17:31:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:34.003 17:31:12 -- common/autotest_common.sh@852 -- # return 0 00:20:34.003 17:31:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:34.003 17:31:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:34.003 17:31:12 -- common/autotest_common.sh@10 -- # set +x 00:20:34.261 17:31:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.261 17:31:12 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:34.261 17:31:12 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:34.261 true 00:20:34.261 17:31:13 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.261 17:31:13 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:34.844 17:31:13 -- target/tls.sh@82 -- # version=0 00:20:34.844 17:31:13 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:34.844 17:31:13 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:34.844 17:31:13 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.844 17:31:13 -- target/tls.sh@90 -- # jq -r .tls_version 00:20:35.154 17:31:13 -- target/tls.sh@90 -- # version=13 00:20:35.154 17:31:13 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:20:35.154 17:31:13 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:35.743 17:31:14 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.743 17:31:14 -- target/tls.sh@98 -- # jq -r .tls_version 
00:20:36.002 17:31:14 -- target/tls.sh@98 -- # version=7 00:20:36.002 17:31:14 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:20:36.002 17:31:14 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.002 17:31:14 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:36.261 17:31:15 -- target/tls.sh@105 -- # ktls=false 00:20:36.261 17:31:15 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:20:36.261 17:31:15 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:36.519 17:31:15 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.519 17:31:15 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:36.777 17:31:15 -- target/tls.sh@113 -- # ktls=true 00:20:36.777 17:31:15 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:20:36.777 17:31:15 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:37.037 17:31:15 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:37.037 17:31:15 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:20:37.605 17:31:16 -- target/tls.sh@121 -- # ktls=false 00:20:37.605 17:31:16 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:20:37.605 17:31:16 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:20:37.605 17:31:16 -- target/tls.sh@49 -- # local key hash crc 00:20:37.605 17:31:16 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:20:37.605 17:31:16 -- target/tls.sh@51 -- # hash=01 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # gzip -1 -c 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # tail -c8 00:20:37.605 17:31:16 -- 
target/tls.sh@52 -- # head -c 4 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # crc='p$H�' 00:20:37.605 17:31:16 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:37.605 17:31:16 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:20:37.605 17:31:16 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:37.605 17:31:16 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:37.605 17:31:16 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:20:37.605 17:31:16 -- target/tls.sh@49 -- # local key hash crc 00:20:37.605 17:31:16 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:20:37.605 17:31:16 -- target/tls.sh@51 -- # hash=01 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # gzip -1 -c 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # tail -c8 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # head -c 4 00:20:37.605 17:31:16 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:20:37.605 17:31:16 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:37.605 17:31:16 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:20:37.605 17:31:16 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:37.605 17:31:16 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:37.605 17:31:16 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:37.605 17:31:16 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:37.605 17:31:16 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:37.605 17:31:16 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:37.605 17:31:16 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:37.605 17:31:16 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:37.605 17:31:16 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:37.605 17:31:16 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:38.173 17:31:16 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:38.173 17:31:16 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:38.173 17:31:16 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.173 [2024-07-12 17:31:17.066504] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.173 17:31:17 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.434 17:31:17 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:38.692 [2024-07-12 17:31:17.535757] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.692 [2024-07-12 17:31:17.535962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.692 17:31:17 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:38.951 malloc0 00:20:38.951 17:31:17 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.211 17:31:18 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:39.470 17:31:18 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:39.470 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.448 Initializing NVMe Controllers 00:20:49.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.448 Initialization complete. Launching workers. 
00:20:49.448 ======================================================== 00:20:49.448 Latency(us) 00:20:49.448 Device Information : IOPS MiB/s Average min max 00:20:49.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11210.97 43.79 5709.68 1269.51 6331.58 00:20:49.448 ======================================================== 00:20:49.448 Total : 11210.97 43.79 5709.68 1269.51 6331.58 00:20:49.448 00:20:49.448 17:31:28 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:49.448 17:31:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.448 17:31:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.448 17:31:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.448 17:31:28 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:49.448 17:31:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.448 17:31:28 -- target/tls.sh@28 -- # bdevperf_pid=4152777 00:20:49.448 17:31:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.448 17:31:28 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.448 17:31:28 -- target/tls.sh@31 -- # waitforlisten 4152777 /var/tmp/bdevperf.sock 00:20:49.448 17:31:28 -- common/autotest_common.sh@819 -- # '[' -z 4152777 ']' 00:20:49.448 17:31:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.448 17:31:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:49.448 17:31:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:49.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.448 17:31:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:49.448 17:31:28 -- common/autotest_common.sh@10 -- # set +x 00:20:49.448 [2024-07-12 17:31:28.402324] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:49.448 [2024-07-12 17:31:28.402387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152777 ] 00:20:49.707 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.707 [2024-07-12 17:31:28.460611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.707 [2024-07-12 17:31:28.494986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.707 17:31:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.707 17:31:28 -- common/autotest_common.sh@852 -- # return 0 00:20:49.707 17:31:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:49.965 [2024-07-12 17:31:28.821164] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.965 TLSTESTn1 00:20:49.965 17:31:28 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:50.224 Running I/O for 10 seconds... 
00:21:00.196 00:21:00.196 Latency(us) 00:21:00.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.196 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:00.196 Verification LBA range: start 0x0 length 0x2000 00:21:00.196 TLSTESTn1 : 10.02 4148.32 16.20 0.00 0.00 30826.54 3604.48 71493.82 00:21:00.196 =================================================================================================================== 00:21:00.196 Total : 4148.32 16.20 0.00 0.00 30826.54 3604.48 71493.82 00:21:00.196 0 00:21:00.196 17:31:39 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.196 17:31:39 -- target/tls.sh@45 -- # killprocess 4152777 00:21:00.196 17:31:39 -- common/autotest_common.sh@926 -- # '[' -z 4152777 ']' 00:21:00.196 17:31:39 -- common/autotest_common.sh@930 -- # kill -0 4152777 00:21:00.196 17:31:39 -- common/autotest_common.sh@931 -- # uname 00:21:00.196 17:31:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:00.196 17:31:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4152777 00:21:00.196 17:31:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:00.196 17:31:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:00.196 17:31:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4152777' 00:21:00.196 killing process with pid 4152777 00:21:00.196 17:31:39 -- common/autotest_common.sh@945 -- # kill 4152777 00:21:00.196 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.196 00:21:00.196 Latency(us) 00:21:00.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.196 =================================================================================================================== 00:21:00.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.196 17:31:39 -- common/autotest_common.sh@950 -- # wait 4152777 00:21:00.456 17:31:39 -- 
target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:00.456 17:31:39 -- common/autotest_common.sh@640 -- # local es=0 00:21:00.456 17:31:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:00.456 17:31:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:00.456 17:31:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:00.456 17:31:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:00.456 17:31:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:00.456 17:31:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:00.456 17:31:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.456 17:31:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.456 17:31:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.456 17:31:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:21:00.456 17:31:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.456 17:31:39 -- target/tls.sh@28 -- # bdevperf_pid=4154649 00:21:00.456 17:31:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.456 17:31:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.456 17:31:39 -- target/tls.sh@31 -- # waitforlisten 4154649 /var/tmp/bdevperf.sock 00:21:00.456 17:31:39 -- common/autotest_common.sh@819 -- # '[' -z 4154649 ']' 00:21:00.456 17:31:39 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.456 17:31:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.456 17:31:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.456 17:31:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.456 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:00.456 [2024-07-12 17:31:39.339463] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:00.456 [2024-07-12 17:31:39.339528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154649 ] 00:21:00.456 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.456 [2024-07-12 17:31:39.399531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.715 [2024-07-12 17:31:39.434218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.715 17:31:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:00.715 17:31:39 -- common/autotest_common.sh@852 -- # return 0 00:21:00.715 17:31:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:00.974 [2024-07-12 17:31:39.764484] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.974 [2024-07-12 17:31:39.769031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:00.974 [2024-07-12 17:31:39.769615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f6b20 (107): Transport endpoint is not connected 00:21:00.974 [2024-07-12 17:31:39.770606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f6b20 (9): Bad file descriptor 00:21:00.974 [2024-07-12 17:31:39.771608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.974 [2024-07-12 17:31:39.771616] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:00.974 [2024-07-12 17:31:39.771623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:00.974 request: 00:21:00.974 { 00:21:00.974 "name": "TLSTEST", 00:21:00.974 "trtype": "tcp", 00:21:00.974 "traddr": "10.0.0.2", 00:21:00.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.974 "adrfam": "ipv4", 00:21:00.974 "trsvcid": "4420", 00:21:00.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.974 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:21:00.974 "method": "bdev_nvme_attach_controller", 00:21:00.974 "req_id": 1 00:21:00.974 } 00:21:00.974 Got JSON-RPC error response 00:21:00.974 response: 00:21:00.974 { 00:21:00.974 "code": -32602, 00:21:00.974 "message": "Invalid parameters" 00:21:00.974 } 00:21:00.974 17:31:39 -- target/tls.sh@36 -- # killprocess 4154649 00:21:00.974 17:31:39 -- common/autotest_common.sh@926 -- # '[' -z 4154649 ']' 00:21:00.974 17:31:39 -- common/autotest_common.sh@930 -- # kill -0 4154649 00:21:00.974 17:31:39 -- common/autotest_common.sh@931 -- # uname 00:21:00.974 17:31:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:00.974 17:31:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4154649 00:21:00.974 17:31:39 -- 
common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:00.974 17:31:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:00.974 17:31:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4154649' 00:21:00.974 killing process with pid 4154649 00:21:00.974 17:31:39 -- common/autotest_common.sh@945 -- # kill 4154649 00:21:00.974 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.974 00:21:00.974 Latency(us) 00:21:00.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.974 =================================================================================================================== 00:21:00.974 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.974 17:31:39 -- common/autotest_common.sh@950 -- # wait 4154649 00:21:01.232 17:31:39 -- target/tls.sh@37 -- # return 1 00:21:01.232 17:31:39 -- common/autotest_common.sh@643 -- # es=1 00:21:01.232 17:31:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:01.232 17:31:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:01.232 17:31:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:01.232 17:31:39 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.232 17:31:39 -- common/autotest_common.sh@640 -- # local es=0 00:21:01.232 17:31:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.232 17:31:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:01.232 17:31:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.232 17:31:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:01.232 17:31:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" 
in 00:21:01.232 17:31:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.232 17:31:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.232 17:31:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.232 17:31:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:01.232 17:31:40 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:01.232 17:31:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.232 17:31:40 -- target/tls.sh@28 -- # bdevperf_pid=4154911 00:21:01.232 17:31:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.232 17:31:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.232 17:31:40 -- target/tls.sh@31 -- # waitforlisten 4154911 /var/tmp/bdevperf.sock 00:21:01.232 17:31:40 -- common/autotest_common.sh@819 -- # '[' -z 4154911 ']' 00:21:01.232 17:31:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.232 17:31:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.232 17:31:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.232 17:31:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.232 17:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:01.232 [2024-07-12 17:31:40.048631] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:01.232 [2024-07-12 17:31:40.048695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154911 ] 00:21:01.232 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.232 [2024-07-12 17:31:40.109441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.232 [2024-07-12 17:31:40.143349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.490 17:31:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:01.490 17:31:40 -- common/autotest_common.sh@852 -- # return 0 00:21:01.490 17:31:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.750 [2024-07-12 17:31:40.461637] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.750 [2024-07-12 17:31:40.469490] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:01.750 [2024-07-12 17:31:40.469516] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:01.750 [2024-07-12 17:31:40.469544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.750 [2024-07-12 17:31:40.469817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf10b20 (107): Transport endpoint is not connected 00:21:01.750 [2024-07-12 
17:31:40.470811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf10b20 (9): Bad file descriptor 00:21:01.750 [2024-07-12 17:31:40.471812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.750 [2024-07-12 17:31:40.471822] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.750 [2024-07-12 17:31:40.471828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.750 request: 00:21:01.750 { 00:21:01.750 "name": "TLSTEST", 00:21:01.750 "trtype": "tcp", 00:21:01.750 "traddr": "10.0.0.2", 00:21:01.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.750 "adrfam": "ipv4", 00:21:01.750 "trsvcid": "4420", 00:21:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.750 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:01.750 "method": "bdev_nvme_attach_controller", 00:21:01.750 "req_id": 1 00:21:01.750 } 00:21:01.750 Got JSON-RPC error response 00:21:01.750 response: 00:21:01.750 { 00:21:01.750 "code": -32602, 00:21:01.750 "message": "Invalid parameters" 00:21:01.750 } 00:21:01.750 17:31:40 -- target/tls.sh@36 -- # killprocess 4154911 00:21:01.750 17:31:40 -- common/autotest_common.sh@926 -- # '[' -z 4154911 ']' 00:21:01.750 17:31:40 -- common/autotest_common.sh@930 -- # kill -0 4154911 00:21:01.750 17:31:40 -- common/autotest_common.sh@931 -- # uname 00:21:01.750 17:31:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:01.750 17:31:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4154911 00:21:01.750 17:31:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:01.750 17:31:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:01.750 17:31:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4154911' 00:21:01.750 killing process with pid 4154911 00:21:01.750 
17:31:40 -- common/autotest_common.sh@945 -- # kill 4154911 00:21:01.750 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.750 00:21:01.750 Latency(us) 00:21:01.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.750 =================================================================================================================== 00:21:01.750 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.750 17:31:40 -- common/autotest_common.sh@950 -- # wait 4154911 00:21:01.750 17:31:40 -- target/tls.sh@37 -- # return 1 00:21:01.750 17:31:40 -- common/autotest_common.sh@643 -- # es=1 00:21:01.750 17:31:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:01.750 17:31:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:01.750 17:31:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:01.750 17:31:40 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.750 17:31:40 -- common/autotest_common.sh@640 -- # local es=0 00:21:01.750 17:31:40 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.750 17:31:40 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:01.750 17:31:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.750 17:31:40 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:01.750 17:31:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.750 17:31:40 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.750 17:31:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.750 17:31:40 -- 
target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:01.750 17:31:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.750 17:31:40 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:01.750 17:31:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.750 17:31:40 -- target/tls.sh@28 -- # bdevperf_pid=4154927 00:21:01.750 17:31:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.750 17:31:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.750 17:31:40 -- target/tls.sh@31 -- # waitforlisten 4154927 /var/tmp/bdevperf.sock 00:21:01.750 17:31:40 -- common/autotest_common.sh@819 -- # '[' -z 4154927 ']' 00:21:01.750 17:31:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.750 17:31:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.750 17:31:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.750 17:31:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.750 17:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:02.010 [2024-07-12 17:31:40.745908] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:02.010 [2024-07-12 17:31:40.745967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154927 ] 00:21:02.010 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.010 [2024-07-12 17:31:40.803805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.010 [2024-07-12 17:31:40.835413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.010 17:31:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.010 17:31:40 -- common/autotest_common.sh@852 -- # return 0 00:21:02.010 17:31:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:02.269 [2024-07-12 17:31:41.161564] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.269 [2024-07-12 17:31:41.171354] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:02.269 [2024-07-12 17:31:41.171380] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:02.269 [2024-07-12 17:31:41.171423] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:02.269 [2024-07-12 17:31:41.171742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dcb20 (107): Transport endpoint is not connected 00:21:02.269 [2024-07-12 
17:31:41.172736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dcb20 (9): Bad file descriptor 00:21:02.269 [2024-07-12 17:31:41.173737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:02.269 [2024-07-12 17:31:41.173747] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:02.269 [2024-07-12 17:31:41.173753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:02.269 request: 00:21:02.269 { 00:21:02.269 "name": "TLSTEST", 00:21:02.269 "trtype": "tcp", 00:21:02.269 "traddr": "10.0.0.2", 00:21:02.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.269 "adrfam": "ipv4", 00:21:02.269 "trsvcid": "4420", 00:21:02.269 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:02.269 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:02.269 "method": "bdev_nvme_attach_controller", 00:21:02.269 "req_id": 1 00:21:02.269 } 00:21:02.269 Got JSON-RPC error response 00:21:02.269 response: 00:21:02.269 { 00:21:02.269 "code": -32602, 00:21:02.269 "message": "Invalid parameters" 00:21:02.269 } 00:21:02.269 17:31:41 -- target/tls.sh@36 -- # killprocess 4154927 00:21:02.269 17:31:41 -- common/autotest_common.sh@926 -- # '[' -z 4154927 ']' 00:21:02.269 17:31:41 -- common/autotest_common.sh@930 -- # kill -0 4154927 00:21:02.269 17:31:41 -- common/autotest_common.sh@931 -- # uname 00:21:02.269 17:31:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:02.269 17:31:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4154927 00:21:02.529 17:31:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:02.529 17:31:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:02.529 17:31:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4154927' 00:21:02.529 killing process with pid 4154927 00:21:02.529 
17:31:41 -- common/autotest_common.sh@945 -- # kill 4154927 00:21:02.529 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.529 00:21:02.529 Latency(us) 00:21:02.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.529 =================================================================================================================== 00:21:02.529 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.529 17:31:41 -- common/autotest_common.sh@950 -- # wait 4154927 00:21:02.529 17:31:41 -- target/tls.sh@37 -- # return 1 00:21:02.529 17:31:41 -- common/autotest_common.sh@643 -- # es=1 00:21:02.529 17:31:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:02.529 17:31:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:02.529 17:31:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:02.529 17:31:41 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.529 17:31:41 -- common/autotest_common.sh@640 -- # local es=0 00:21:02.529 17:31:41 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.529 17:31:41 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:02.529 17:31:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:02.529 17:31:41 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:02.529 17:31:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:02.529 17:31:41 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.529 17:31:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.529 17:31:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.529 17:31:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.529 17:31:41 -- target/tls.sh@23 -- # psk= 00:21:02.529 17:31:41 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.529 17:31:41 -- target/tls.sh@28 -- # bdevperf_pid=4155193 00:21:02.529 17:31:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.529 17:31:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.529 17:31:41 -- target/tls.sh@31 -- # waitforlisten 4155193 /var/tmp/bdevperf.sock 00:21:02.529 17:31:41 -- common/autotest_common.sh@819 -- # '[' -z 4155193 ']' 00:21:02.529 17:31:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.529 17:31:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:02.529 17:31:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.529 17:31:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:02.529 17:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:02.529 [2024-07-12 17:31:41.450304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:02.529 [2024-07-12 17:31:41.450367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155193 ] 00:21:02.529 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.789 [2024-07-12 17:31:41.509521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.789 [2024-07-12 17:31:41.543310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.789 17:31:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.789 17:31:41 -- common/autotest_common.sh@852 -- # return 0 00:21:02.789 17:31:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:03.049 [2024-07-12 17:31:41.873193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.049 [2024-07-12 17:31:41.874976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb891f0 (9): Bad file descriptor 00:21:03.049 [2024-07-12 17:31:41.875976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.049 [2024-07-12 17:31:41.875984] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.049 [2024-07-12 17:31:41.875991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:03.049 request: 00:21:03.049 { 00:21:03.049 "name": "TLSTEST", 00:21:03.049 "trtype": "tcp", 00:21:03.049 "traddr": "10.0.0.2", 00:21:03.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.049 "adrfam": "ipv4", 00:21:03.049 "trsvcid": "4420", 00:21:03.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.049 "method": "bdev_nvme_attach_controller", 00:21:03.049 "req_id": 1 00:21:03.049 } 00:21:03.049 Got JSON-RPC error response 00:21:03.049 response: 00:21:03.049 { 00:21:03.049 "code": -32602, 00:21:03.049 "message": "Invalid parameters" 00:21:03.049 } 00:21:03.049 17:31:41 -- target/tls.sh@36 -- # killprocess 4155193 00:21:03.049 17:31:41 -- common/autotest_common.sh@926 -- # '[' -z 4155193 ']' 00:21:03.049 17:31:41 -- common/autotest_common.sh@930 -- # kill -0 4155193 00:21:03.049 17:31:41 -- common/autotest_common.sh@931 -- # uname 00:21:03.049 17:31:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.049 17:31:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4155193 00:21:03.049 17:31:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:03.049 17:31:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:03.049 17:31:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4155193' 00:21:03.049 killing process with pid 4155193 00:21:03.049 17:31:41 -- common/autotest_common.sh@945 -- # kill 4155193 00:21:03.049 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.049 00:21:03.049 Latency(us) 00:21:03.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.049 =================================================================================================================== 00:21:03.049 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.049 17:31:41 -- common/autotest_common.sh@950 -- # wait 4155193 00:21:03.308 17:31:42 -- target/tls.sh@37 -- # return 1 00:21:03.308 17:31:42 -- 
common/autotest_common.sh@643 -- # es=1 00:21:03.308 17:31:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:03.308 17:31:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:03.308 17:31:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:03.308 17:31:42 -- target/tls.sh@167 -- # killprocess 4149885 00:21:03.308 17:31:42 -- common/autotest_common.sh@926 -- # '[' -z 4149885 ']' 00:21:03.308 17:31:42 -- common/autotest_common.sh@930 -- # kill -0 4149885 00:21:03.308 17:31:42 -- common/autotest_common.sh@931 -- # uname 00:21:03.308 17:31:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.308 17:31:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4149885 00:21:03.308 17:31:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:03.308 17:31:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:03.308 17:31:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4149885' 00:21:03.308 killing process with pid 4149885 00:21:03.308 17:31:42 -- common/autotest_common.sh@945 -- # kill 4149885 00:21:03.308 17:31:42 -- common/autotest_common.sh@950 -- # wait 4149885 00:21:03.568 17:31:42 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:21:03.568 17:31:42 -- target/tls.sh@49 -- # local key hash crc 00:21:03.568 17:31:42 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:03.568 17:31:42 -- target/tls.sh@51 -- # hash=02 00:21:03.568 17:31:42 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:21:03.568 17:31:42 -- target/tls.sh@52 -- # head -c 4 00:21:03.568 17:31:42 -- target/tls.sh@52 -- # gzip -1 -c 00:21:03.568 17:31:42 -- target/tls.sh@52 -- # tail -c8 00:21:03.568 17:31:42 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:03.568 17:31:42 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:03.568 17:31:42 -- target/tls.sh@54 -- # echo -n 
'00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:03.568 17:31:42 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:03.568 17:31:42 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:03.568 17:31:42 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:03.568 17:31:42 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:03.568 17:31:42 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:03.568 17:31:42 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:03.568 17:31:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:03.568 17:31:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:03.568 17:31:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.568 17:31:42 -- nvmf/common.sh@469 -- # nvmfpid=4155248 00:21:03.568 17:31:42 -- nvmf/common.sh@470 -- # waitforlisten 4155248 00:21:03.568 17:31:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.568 17:31:42 -- common/autotest_common.sh@819 -- # '[' -z 4155248 ']' 00:21:03.568 17:31:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.568 17:31:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:03.568 17:31:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
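The `format_interchange_psk` steps above (`gzip -1 -c | tail -c8 | head -c4` to obtain the CRC, then base64 of key+CRC) can be reproduced directly. This is a minimal sketch of that derivation, not SPDK's implementation: it relies on gzip appending the CRC32 of its input as the first 4 bytes (little-endian) of its 8-byte trailer, which is the same CRC-32 that Python's `zlib.crc32` computes. The `02` hash identifier is taken from the log; mapping it to a specific hash (SHA-384 in the NVMe TLS PSK interchange format) is an assumption about the spec, not something the log states.

```python
import base64
import struct
import zlib

# Configured key bytes and hash identifier, as shown in the log above.
key = b"00112233445566778899aabbccddeeff0011223344556677"
hash_id = "02"  # assumed to select SHA-384 in the interchange format

# Equivalent of `echo -n $key | gzip -1 -c | tail -c8 | head -c4`:
# gzip's trailer starts with the CRC32 of the uncompressed data,
# stored little-endian, which zlib.crc32 gives us directly.
crc = struct.pack("<I", zlib.crc32(key))

# NVMe TLS PSK interchange format: "NVMeTLSkey-1:<hash>:<base64(key+crc)>:"
psk = "NVMeTLSkey-1:%s:%s:" % (hash_id, base64.b64encode(key + crc).decode())
print(psk)
```

Running this yields the same `NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==:` string that the test writes to `key_long.txt`.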
00:21:03.568 17:31:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:03.568 17:31:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.568 [2024-07-12 17:31:42.423118] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:03.568 [2024-07-12 17:31:42.423175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.568 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.568 [2024-07-12 17:31:42.500132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.827 [2024-07-12 17:31:42.540088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:03.827 [2024-07-12 17:31:42.540230] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.827 [2024-07-12 17:31:42.540241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.827 [2024-07-12 17:31:42.540250] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.827 [2024-07-12 17:31:42.540282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.393 17:31:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.393 17:31:43 -- common/autotest_common.sh@852 -- # return 0 00:21:04.393 17:31:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:04.393 17:31:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:04.393 17:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:04.651 17:31:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.651 17:31:43 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:04.651 17:31:43 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:04.651 17:31:43 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:04.651 [2024-07-12 17:31:43.595552] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.651 17:31:43 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:04.909 17:31:43 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:05.167 [2024-07-12 17:31:44.060806] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.167 [2024-07-12 17:31:44.061017] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.167 17:31:44 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:05.427 malloc0 00:21:05.427 17:31:44 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:05.686 17:31:44 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:05.945 17:31:44 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:05.945 17:31:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:05.945 17:31:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:05.945 17:31:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:05.945 17:31:44 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:05.945 17:31:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.945 17:31:44 -- target/tls.sh@28 -- # bdevperf_pid=4155775 00:21:05.945 17:31:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.945 17:31:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:05.945 17:31:44 -- target/tls.sh@31 -- # waitforlisten 4155775 /var/tmp/bdevperf.sock 00:21:05.945 17:31:44 -- common/autotest_common.sh@819 -- # '[' -z 4155775 ']' 00:21:05.945 17:31:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.945 17:31:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:05.945 17:31:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:05.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.945 17:31:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:05.945 17:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:05.945 [2024-07-12 17:31:44.822090] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:05.945 [2024-07-12 17:31:44.822149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155775 ] 00:21:05.945 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.945 [2024-07-12 17:31:44.880202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.204 [2024-07-12 17:31:44.915868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.204 17:31:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:06.204 17:31:45 -- common/autotest_common.sh@852 -- # return 0 00:21:06.204 17:31:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:06.463 [2024-07-12 17:31:45.230091] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.463 TLSTESTn1 00:21:06.463 17:31:45 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.463 Running I/O for 10 seconds... 
00:21:18.676 00:21:18.676 Latency(us) 00:21:18.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.676 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.676 Verification LBA range: start 0x0 length 0x2000 00:21:18.676 TLSTESTn1 : 10.02 5242.40 20.48 0.00 0.00 24388.61 3559.80 71970.44 00:21:18.676 =================================================================================================================== 00:21:18.676 Total : 5242.40 20.48 0.00 0.00 24388.61 3559.80 71970.44 00:21:18.676 0 00:21:18.676 17:31:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.676 17:31:55 -- target/tls.sh@45 -- # killprocess 4155775 00:21:18.676 17:31:55 -- common/autotest_common.sh@926 -- # '[' -z 4155775 ']' 00:21:18.676 17:31:55 -- common/autotest_common.sh@930 -- # kill -0 4155775 00:21:18.676 17:31:55 -- common/autotest_common.sh@931 -- # uname 00:21:18.676 17:31:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:18.676 17:31:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4155775 00:21:18.676 17:31:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:18.676 17:31:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:18.676 17:31:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4155775' 00:21:18.676 killing process with pid 4155775 00:21:18.676 17:31:55 -- common/autotest_common.sh@945 -- # kill 4155775 00:21:18.676 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.676 00:21:18.676 Latency(us) 00:21:18.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.676 =================================================================================================================== 00:21:18.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.676 17:31:55 -- common/autotest_common.sh@950 -- # wait 4155775 00:21:18.676 17:31:55 -- 
target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.676 17:31:55 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.676 17:31:55 -- common/autotest_common.sh@640 -- # local es=0 00:21:18.676 17:31:55 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.676 17:31:55 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:18.676 17:31:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.676 17:31:55 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:18.676 17:31:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.676 17:31:55 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.676 17:31:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.676 17:31:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.676 17:31:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.676 17:31:55 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:18.676 17:31:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.676 17:31:55 -- target/tls.sh@28 -- # bdevperf_pid=4157650 00:21:18.676 17:31:55 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.676 17:31:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.676 17:31:55 -- target/tls.sh@31 
-- # waitforlisten 4157650 /var/tmp/bdevperf.sock 00:21:18.676 17:31:55 -- common/autotest_common.sh@819 -- # '[' -z 4157650 ']' 00:21:18.676 17:31:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.676 17:31:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:18.676 17:31:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.676 17:31:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:18.676 17:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:18.676 [2024-07-12 17:31:55.750509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:18.676 [2024-07-12 17:31:55.750572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157650 ] 00:21:18.676 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.676 [2024-07-12 17:31:55.809260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.676 [2024-07-12 17:31:55.840715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.676 17:31:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:18.676 17:31:55 -- common/autotest_common.sh@852 -- # return 0 00:21:18.676 17:31:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.676 [2024-07-12 17:31:56.170857] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support 
is considered experimental 00:21:18.676 [2024-07-12 17:31:56.170897] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:18.676 request: 00:21:18.676 { 00:21:18.676 "name": "TLSTEST", 00:21:18.676 "trtype": "tcp", 00:21:18.676 "traddr": "10.0.0.2", 00:21:18.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.676 "adrfam": "ipv4", 00:21:18.676 "trsvcid": "4420", 00:21:18.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.676 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:18.676 "method": "bdev_nvme_attach_controller", 00:21:18.676 "req_id": 1 00:21:18.676 } 00:21:18.676 Got JSON-RPC error response 00:21:18.676 response: 00:21:18.676 { 00:21:18.676 "code": -22, 00:21:18.676 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:18.676 } 00:21:18.676 17:31:56 -- target/tls.sh@36 -- # killprocess 4157650 00:21:18.676 17:31:56 -- common/autotest_common.sh@926 -- # '[' -z 4157650 ']' 00:21:18.676 17:31:56 -- common/autotest_common.sh@930 -- # kill -0 4157650 00:21:18.676 17:31:56 -- common/autotest_common.sh@931 -- # uname 00:21:18.676 17:31:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:18.676 17:31:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4157650 00:21:18.676 17:31:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:18.676 17:31:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:18.676 17:31:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4157650' 00:21:18.676 killing process with pid 4157650 00:21:18.676 17:31:56 -- common/autotest_common.sh@945 -- # kill 4157650 00:21:18.676 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.676 00:21:18.676 Latency(us) 00:21:18.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.677 
=================================================================================================================== 00:21:18.677 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.677 17:31:56 -- common/autotest_common.sh@950 -- # wait 4157650 00:21:18.677 17:31:56 -- target/tls.sh@37 -- # return 1 00:21:18.677 17:31:56 -- common/autotest_common.sh@643 -- # es=1 00:21:18.677 17:31:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:18.677 17:31:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:18.677 17:31:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:18.677 17:31:56 -- target/tls.sh@183 -- # killprocess 4155248 00:21:18.677 17:31:56 -- common/autotest_common.sh@926 -- # '[' -z 4155248 ']' 00:21:18.677 17:31:56 -- common/autotest_common.sh@930 -- # kill -0 4155248 00:21:18.677 17:31:56 -- common/autotest_common.sh@931 -- # uname 00:21:18.677 17:31:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:18.677 17:31:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4155248 00:21:18.677 17:31:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:18.677 17:31:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:18.677 17:31:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4155248' 00:21:18.677 killing process with pid 4155248 00:21:18.677 17:31:56 -- common/autotest_common.sh@945 -- # kill 4155248 00:21:18.677 17:31:56 -- common/autotest_common.sh@950 -- # wait 4155248 00:21:18.677 17:31:56 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:18.677 17:31:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:18.677 17:31:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:18.677 17:31:56 -- common/autotest_common.sh@10 -- # set +x 00:21:18.677 17:31:56 -- nvmf/common.sh@469 -- # nvmfpid=4157917 00:21:18.677 17:31:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
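The `-22` "Could not retrieve PSK from file" failure above is deliberate: the test ran `chmod 0666` on `key_long.txt`, and both `bdev_nvme_rpc.c:tcp_load_psk` and `tcp.c:tcp_load_psk` report "Incorrect permissions for PSK file" before the later `chmod 0600` run succeeds. The sketch below shows the kind of permission gate those errors imply, rejecting any PSK file with group or other access bits set; the exact SPDK check may differ, and the helper name is hypothetical.

```python
import os
import stat
import tempfile

def psk_file_permissions_ok(path):
    """Accept a PSK file only if group and others have no access bits set
    (i.e. 0600-style permissions). Hypothetical helper, not SPDK's code."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# Demonstration with a throwaway file: 0600 passes, 0666 is rejected,
# mirroring the chmod 0600 / chmod 0666 runs in the log.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o600)
ok_0600 = psk_file_permissions_ok(path)

os.chmod(path, 0o666)
ok_0666 = psk_file_permissions_ok(path)

os.unlink(path)
print(ok_0600, ok_0666)  # True False
```

This also explains why the same key file works in the earlier `TLSTESTn1` run (after `chmod 0600`) and fails here (after `chmod 0666`) without any change to the key material itself.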
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:18.677 17:31:56 -- nvmf/common.sh@470 -- # waitforlisten 4157917 00:21:18.677 17:31:56 -- common/autotest_common.sh@819 -- # '[' -z 4157917 ']' 00:21:18.677 17:31:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.677 17:31:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:18.677 17:31:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.677 17:31:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:18.677 17:31:56 -- common/autotest_common.sh@10 -- # set +x 00:21:18.677 [2024-07-12 17:31:56.691348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:18.677 [2024-07-12 17:31:56.691406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.677 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.677 [2024-07-12 17:31:56.769178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.677 [2024-07-12 17:31:56.810205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:18.677 [2024-07-12 17:31:56.810358] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.677 [2024-07-12 17:31:56.810370] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.677 [2024-07-12 17:31:56.810379] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.677 [2024-07-12 17:31:56.810399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.677 17:31:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:18.677 17:31:57 -- common/autotest_common.sh@852 -- # return 0 00:21:18.677 17:31:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:18.677 17:31:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:18.677 17:31:57 -- common/autotest_common.sh@10 -- # set +x 00:21:18.937 17:31:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.937 17:31:57 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.937 17:31:57 -- common/autotest_common.sh@640 -- # local es=0 00:21:18.937 17:31:57 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.937 17:31:57 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:21:18.937 17:31:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.937 17:31:57 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:21:18.937 17:31:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.937 17:31:57 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.937 17:31:57 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.937 17:31:57 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.937 [2024-07-12 17:31:57.805402] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.937 17:31:57 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:19.195 17:31:58 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:19.764 [2024-07-12 17:31:58.507309] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.764 [2024-07-12 17:31:58.507525] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.764 17:31:58 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:20.023 malloc0 00:21:20.023 17:31:58 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:20.282 17:31:59 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:20.541 [2024-07-12 17:31:59.455195] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:20.541 [2024-07-12 17:31:59.455228] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:20.541 [2024-07-12 17:31:59.455250] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:20.541 request: 00:21:20.541 { 00:21:20.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.541 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.541 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:20.541 "method": "nvmf_subsystem_add_host", 00:21:20.541 "req_id": 1 00:21:20.541 } 00:21:20.541 Got JSON-RPC error response 00:21:20.541 response: 00:21:20.541 { 00:21:20.541 "code": -32603, 00:21:20.541 "message": "Internal error" 
00:21:20.541 } 00:21:20.541 17:31:59 -- common/autotest_common.sh@643 -- # es=1 00:21:20.541 17:31:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:20.541 17:31:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:20.541 17:31:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:20.541 17:31:59 -- target/tls.sh@189 -- # killprocess 4157917 00:21:20.541 17:31:59 -- common/autotest_common.sh@926 -- # '[' -z 4157917 ']' 00:21:20.541 17:31:59 -- common/autotest_common.sh@930 -- # kill -0 4157917 00:21:20.541 17:31:59 -- common/autotest_common.sh@931 -- # uname 00:21:20.541 17:31:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:20.541 17:31:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4157917 00:21:20.800 17:31:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:20.800 17:31:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:20.800 17:31:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4157917' 00:21:20.800 killing process with pid 4157917 00:21:20.800 17:31:59 -- common/autotest_common.sh@945 -- # kill 4157917 00:21:20.800 17:31:59 -- common/autotest_common.sh@950 -- # wait 4157917 00:21:20.800 17:31:59 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:20.800 17:31:59 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:21:20.800 17:31:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:20.800 17:31:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:20.800 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:21:20.800 17:31:59 -- nvmf/common.sh@469 -- # nvmfpid=4158485 00:21:20.800 17:31:59 -- nvmf/common.sh@470 -- # waitforlisten 4158485 00:21:20.800 17:31:59 -- common/autotest_common.sh@819 -- # '[' -z 4158485 ']' 00:21:20.800 17:31:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.800 17:31:59 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:20.800 17:31:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.800 17:31:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:20.800 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:21:20.800 17:31:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.060 [2024-07-12 17:31:59.771226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:21.060 [2024-07-12 17:31:59.771289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.060 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.060 [2024-07-12 17:31:59.845855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.060 [2024-07-12 17:31:59.887504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:21.060 [2024-07-12 17:31:59.887642] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.060 [2024-07-12 17:31:59.887653] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.060 [2024-07-12 17:31:59.887662] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.060 [2024-07-12 17:31:59.887680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.319 17:32:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:21.319 17:32:00 -- common/autotest_common.sh@852 -- # return 0 00:21:21.319 17:32:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:21.319 17:32:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:21.319 17:32:00 -- common/autotest_common.sh@10 -- # set +x 00:21:21.319 17:32:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.319 17:32:00 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:21.319 17:32:00 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:21.319 17:32:00 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.578 [2024-07-12 17:32:00.457480] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.578 17:32:00 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:21.836 17:32:00 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.095 [2024-07-12 17:32:00.930750] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.095 [2024-07-12 17:32:00.930956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.095 17:32:00 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:22.353 malloc0 00:21:22.353 17:32:01 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.611 17:32:01 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:22.868 17:32:01 -- target/tls.sh@197 -- # bdevperf_pid=4158782 00:21:22.868 17:32:01 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.868 17:32:01 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.868 17:32:01 -- target/tls.sh@200 -- # waitforlisten 4158782 /var/tmp/bdevperf.sock 00:21:22.868 17:32:01 -- common/autotest_common.sh@819 -- # '[' -z 4158782 ']' 00:21:22.868 17:32:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.868 17:32:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.868 17:32:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.868 17:32:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.868 17:32:01 -- common/autotest_common.sh@10 -- # set +x 00:21:22.868 [2024-07-12 17:32:01.700866] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:22.868 [2024-07-12 17:32:01.700926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158782 ] 00:21:22.868 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.868 [2024-07-12 17:32:01.760363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.868 [2024-07-12 17:32:01.796196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.801 17:32:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:23.801 17:32:02 -- common/autotest_common.sh@852 -- # return 0 00:21:23.801 17:32:02 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:23.801 [2024-07-12 17:32:02.769119] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.062 TLSTESTn1 00:21:24.062 17:32:02 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:24.321 17:32:03 -- target/tls.sh@205 -- # tgtconf='{ 00:21:24.321 "subsystems": [ 00:21:24.321 { 00:21:24.321 "subsystem": "iobuf", 00:21:24.321 "config": [ 00:21:24.321 { 00:21:24.321 "method": "iobuf_set_options", 00:21:24.321 "params": { 00:21:24.321 "small_pool_count": 8192, 00:21:24.321 "large_pool_count": 1024, 00:21:24.321 "small_bufsize": 8192, 00:21:24.321 "large_bufsize": 135168 00:21:24.321 } 00:21:24.321 } 00:21:24.321 ] 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "subsystem": "sock", 00:21:24.321 "config": [ 00:21:24.321 { 00:21:24.321 "method": "sock_impl_set_options", 00:21:24.321 "params": { 00:21:24.321 "impl_name": "posix", 
00:21:24.321 "recv_buf_size": 2097152, 00:21:24.321 "send_buf_size": 2097152, 00:21:24.321 "enable_recv_pipe": true, 00:21:24.321 "enable_quickack": false, 00:21:24.321 "enable_placement_id": 0, 00:21:24.321 "enable_zerocopy_send_server": true, 00:21:24.321 "enable_zerocopy_send_client": false, 00:21:24.321 "zerocopy_threshold": 0, 00:21:24.321 "tls_version": 0, 00:21:24.321 "enable_ktls": false 00:21:24.321 } 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "method": "sock_impl_set_options", 00:21:24.321 "params": { 00:21:24.321 "impl_name": "ssl", 00:21:24.321 "recv_buf_size": 4096, 00:21:24.321 "send_buf_size": 4096, 00:21:24.321 "enable_recv_pipe": true, 00:21:24.321 "enable_quickack": false, 00:21:24.321 "enable_placement_id": 0, 00:21:24.321 "enable_zerocopy_send_server": true, 00:21:24.321 "enable_zerocopy_send_client": false, 00:21:24.321 "zerocopy_threshold": 0, 00:21:24.321 "tls_version": 0, 00:21:24.321 "enable_ktls": false 00:21:24.321 } 00:21:24.321 } 00:21:24.321 ] 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "subsystem": "vmd", 00:21:24.321 "config": [] 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "subsystem": "accel", 00:21:24.321 "config": [ 00:21:24.321 { 00:21:24.321 "method": "accel_set_options", 00:21:24.321 "params": { 00:21:24.321 "small_cache_size": 128, 00:21:24.321 "large_cache_size": 16, 00:21:24.321 "task_count": 2048, 00:21:24.321 "sequence_count": 2048, 00:21:24.321 "buf_count": 2048 00:21:24.321 } 00:21:24.321 } 00:21:24.321 ] 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "subsystem": "bdev", 00:21:24.321 "config": [ 00:21:24.321 { 00:21:24.321 "method": "bdev_set_options", 00:21:24.321 "params": { 00:21:24.321 "bdev_io_pool_size": 65535, 00:21:24.321 "bdev_io_cache_size": 256, 00:21:24.321 "bdev_auto_examine": true, 00:21:24.321 "iobuf_small_cache_size": 128, 00:21:24.321 "iobuf_large_cache_size": 16 00:21:24.321 } 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "method": "bdev_raid_set_options", 00:21:24.321 "params": { 00:21:24.321 
"process_window_size_kb": 1024 00:21:24.321 } 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "method": "bdev_iscsi_set_options", 00:21:24.321 "params": { 00:21:24.321 "timeout_sec": 30 00:21:24.321 } 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "method": "bdev_nvme_set_options", 00:21:24.321 "params": { 00:21:24.321 "action_on_timeout": "none", 00:21:24.321 "timeout_us": 0, 00:21:24.321 "timeout_admin_us": 0, 00:21:24.321 "keep_alive_timeout_ms": 10000, 00:21:24.321 "transport_retry_count": 4, 00:21:24.321 "arbitration_burst": 0, 00:21:24.321 "low_priority_weight": 0, 00:21:24.321 "medium_priority_weight": 0, 00:21:24.321 "high_priority_weight": 0, 00:21:24.321 "nvme_adminq_poll_period_us": 10000, 00:21:24.321 "nvme_ioq_poll_period_us": 0, 00:21:24.321 "io_queue_requests": 0, 00:21:24.321 "delay_cmd_submit": true, 00:21:24.321 "bdev_retry_count": 3, 00:21:24.321 "transport_ack_timeout": 0, 00:21:24.321 "ctrlr_loss_timeout_sec": 0, 00:21:24.321 "reconnect_delay_sec": 0, 00:21:24.321 "fast_io_fail_timeout_sec": 0, 00:21:24.321 "generate_uuids": false, 00:21:24.321 "transport_tos": 0, 00:21:24.321 "io_path_stat": false, 00:21:24.321 "allow_accel_sequence": false 00:21:24.321 } 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "method": "bdev_nvme_set_hotplug", 00:21:24.321 "params": { 00:21:24.321 "period_us": 100000, 00:21:24.321 "enable": false 00:21:24.321 } 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "method": "bdev_malloc_create", 00:21:24.321 "params": { 00:21:24.321 "name": "malloc0", 00:21:24.321 "num_blocks": 8192, 00:21:24.321 "block_size": 4096, 00:21:24.321 "physical_block_size": 4096, 00:21:24.321 "uuid": "e31bccd4-bf8a-447c-9264-8fde6e14ca4c", 00:21:24.321 "optimal_io_boundary": 0 00:21:24.321 } 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "method": "bdev_wait_for_examine" 00:21:24.321 } 00:21:24.321 ] 00:21:24.321 }, 00:21:24.321 { 00:21:24.321 "subsystem": "nbd", 00:21:24.322 "config": [] 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "subsystem": "scheduler", 
00:21:24.322 "config": [ 00:21:24.322 { 00:21:24.322 "method": "framework_set_scheduler", 00:21:24.322 "params": { 00:21:24.322 "name": "static" 00:21:24.322 } 00:21:24.322 } 00:21:24.322 ] 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "subsystem": "nvmf", 00:21:24.322 "config": [ 00:21:24.322 { 00:21:24.322 "method": "nvmf_set_config", 00:21:24.322 "params": { 00:21:24.322 "discovery_filter": "match_any", 00:21:24.322 "admin_cmd_passthru": { 00:21:24.322 "identify_ctrlr": false 00:21:24.322 } 00:21:24.322 } 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "method": "nvmf_set_max_subsystems", 00:21:24.322 "params": { 00:21:24.322 "max_subsystems": 1024 00:21:24.322 } 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "method": "nvmf_set_crdt", 00:21:24.322 "params": { 00:21:24.322 "crdt1": 0, 00:21:24.322 "crdt2": 0, 00:21:24.322 "crdt3": 0 00:21:24.322 } 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "method": "nvmf_create_transport", 00:21:24.322 "params": { 00:21:24.322 "trtype": "TCP", 00:21:24.322 "max_queue_depth": 128, 00:21:24.322 "max_io_qpairs_per_ctrlr": 127, 00:21:24.322 "in_capsule_data_size": 4096, 00:21:24.322 "max_io_size": 131072, 00:21:24.322 "io_unit_size": 131072, 00:21:24.322 "max_aq_depth": 128, 00:21:24.322 "num_shared_buffers": 511, 00:21:24.322 "buf_cache_size": 4294967295, 00:21:24.322 "dif_insert_or_strip": false, 00:21:24.322 "zcopy": false, 00:21:24.322 "c2h_success": false, 00:21:24.322 "sock_priority": 0, 00:21:24.322 "abort_timeout_sec": 1 00:21:24.322 } 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "method": "nvmf_create_subsystem", 00:21:24.322 "params": { 00:21:24.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.322 "allow_any_host": false, 00:21:24.322 "serial_number": "SPDK00000000000001", 00:21:24.322 "model_number": "SPDK bdev Controller", 00:21:24.322 "max_namespaces": 10, 00:21:24.322 "min_cntlid": 1, 00:21:24.322 "max_cntlid": 65519, 00:21:24.322 "ana_reporting": false 00:21:24.322 } 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "method": 
"nvmf_subsystem_add_host", 00:21:24.322 "params": { 00:21:24.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.322 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.322 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:24.322 } 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "method": "nvmf_subsystem_add_ns", 00:21:24.322 "params": { 00:21:24.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.322 "namespace": { 00:21:24.322 "nsid": 1, 00:21:24.322 "bdev_name": "malloc0", 00:21:24.322 "nguid": "E31BCCD4BF8A447C92648FDE6E14CA4C", 00:21:24.322 "uuid": "e31bccd4-bf8a-447c-9264-8fde6e14ca4c" 00:21:24.322 } 00:21:24.322 } 00:21:24.322 }, 00:21:24.322 { 00:21:24.322 "method": "nvmf_subsystem_add_listener", 00:21:24.322 "params": { 00:21:24.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.322 "listen_address": { 00:21:24.322 "trtype": "TCP", 00:21:24.322 "adrfam": "IPv4", 00:21:24.322 "traddr": "10.0.0.2", 00:21:24.322 "trsvcid": "4420" 00:21:24.322 }, 00:21:24.322 "secure_channel": true 00:21:24.322 } 00:21:24.322 } 00:21:24.322 ] 00:21:24.322 } 00:21:24.322 ] 00:21:24.322 }' 00:21:24.322 17:32:03 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:24.581 17:32:03 -- target/tls.sh@206 -- # bdevperfconf='{ 00:21:24.581 "subsystems": [ 00:21:24.581 { 00:21:24.581 "subsystem": "iobuf", 00:21:24.581 "config": [ 00:21:24.581 { 00:21:24.581 "method": "iobuf_set_options", 00:21:24.581 "params": { 00:21:24.581 "small_pool_count": 8192, 00:21:24.581 "large_pool_count": 1024, 00:21:24.581 "small_bufsize": 8192, 00:21:24.581 "large_bufsize": 135168 00:21:24.581 } 00:21:24.581 } 00:21:24.581 ] 00:21:24.581 }, 00:21:24.581 { 00:21:24.581 "subsystem": "sock", 00:21:24.582 "config": [ 00:21:24.582 { 00:21:24.582 "method": "sock_impl_set_options", 00:21:24.582 "params": { 00:21:24.582 "impl_name": "posix", 00:21:24.582 "recv_buf_size": 2097152, 00:21:24.582 
"send_buf_size": 2097152, 00:21:24.582 "enable_recv_pipe": true, 00:21:24.582 "enable_quickack": false, 00:21:24.582 "enable_placement_id": 0, 00:21:24.582 "enable_zerocopy_send_server": true, 00:21:24.582 "enable_zerocopy_send_client": false, 00:21:24.582 "zerocopy_threshold": 0, 00:21:24.582 "tls_version": 0, 00:21:24.582 "enable_ktls": false 00:21:24.582 } 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "method": "sock_impl_set_options", 00:21:24.582 "params": { 00:21:24.582 "impl_name": "ssl", 00:21:24.582 "recv_buf_size": 4096, 00:21:24.582 "send_buf_size": 4096, 00:21:24.582 "enable_recv_pipe": true, 00:21:24.582 "enable_quickack": false, 00:21:24.582 "enable_placement_id": 0, 00:21:24.582 "enable_zerocopy_send_server": true, 00:21:24.582 "enable_zerocopy_send_client": false, 00:21:24.582 "zerocopy_threshold": 0, 00:21:24.582 "tls_version": 0, 00:21:24.582 "enable_ktls": false 00:21:24.582 } 00:21:24.582 } 00:21:24.582 ] 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "subsystem": "vmd", 00:21:24.582 "config": [] 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "subsystem": "accel", 00:21:24.582 "config": [ 00:21:24.582 { 00:21:24.582 "method": "accel_set_options", 00:21:24.582 "params": { 00:21:24.582 "small_cache_size": 128, 00:21:24.582 "large_cache_size": 16, 00:21:24.582 "task_count": 2048, 00:21:24.582 "sequence_count": 2048, 00:21:24.582 "buf_count": 2048 00:21:24.582 } 00:21:24.582 } 00:21:24.582 ] 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "subsystem": "bdev", 00:21:24.582 "config": [ 00:21:24.582 { 00:21:24.582 "method": "bdev_set_options", 00:21:24.582 "params": { 00:21:24.582 "bdev_io_pool_size": 65535, 00:21:24.582 "bdev_io_cache_size": 256, 00:21:24.582 "bdev_auto_examine": true, 00:21:24.582 "iobuf_small_cache_size": 128, 00:21:24.582 "iobuf_large_cache_size": 16 00:21:24.582 } 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "method": "bdev_raid_set_options", 00:21:24.582 "params": { 00:21:24.582 "process_window_size_kb": 1024 00:21:24.582 } 00:21:24.582 }, 
00:21:24.582 { 00:21:24.582 "method": "bdev_iscsi_set_options", 00:21:24.582 "params": { 00:21:24.582 "timeout_sec": 30 00:21:24.582 } 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "method": "bdev_nvme_set_options", 00:21:24.582 "params": { 00:21:24.582 "action_on_timeout": "none", 00:21:24.582 "timeout_us": 0, 00:21:24.582 "timeout_admin_us": 0, 00:21:24.582 "keep_alive_timeout_ms": 10000, 00:21:24.582 "transport_retry_count": 4, 00:21:24.582 "arbitration_burst": 0, 00:21:24.582 "low_priority_weight": 0, 00:21:24.582 "medium_priority_weight": 0, 00:21:24.582 "high_priority_weight": 0, 00:21:24.582 "nvme_adminq_poll_period_us": 10000, 00:21:24.582 "nvme_ioq_poll_period_us": 0, 00:21:24.582 "io_queue_requests": 512, 00:21:24.582 "delay_cmd_submit": true, 00:21:24.582 "bdev_retry_count": 3, 00:21:24.582 "transport_ack_timeout": 0, 00:21:24.582 "ctrlr_loss_timeout_sec": 0, 00:21:24.582 "reconnect_delay_sec": 0, 00:21:24.582 "fast_io_fail_timeout_sec": 0, 00:21:24.582 "generate_uuids": false, 00:21:24.582 "transport_tos": 0, 00:21:24.582 "io_path_stat": false, 00:21:24.582 "allow_accel_sequence": false 00:21:24.582 } 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "method": "bdev_nvme_attach_controller", 00:21:24.582 "params": { 00:21:24.582 "name": "TLSTEST", 00:21:24.582 "trtype": "TCP", 00:21:24.582 "adrfam": "IPv4", 00:21:24.582 "traddr": "10.0.0.2", 00:21:24.582 "trsvcid": "4420", 00:21:24.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.582 "prchk_reftag": false, 00:21:24.582 "prchk_guard": false, 00:21:24.582 "ctrlr_loss_timeout_sec": 0, 00:21:24.582 "reconnect_delay_sec": 0, 00:21:24.582 "fast_io_fail_timeout_sec": 0, 00:21:24.582 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:24.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.582 "hdgst": false, 00:21:24.582 "ddgst": false 00:21:24.582 } 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "method": "bdev_nvme_set_hotplug", 00:21:24.582 "params": { 00:21:24.582 
"period_us": 100000, 00:21:24.582 "enable": false 00:21:24.582 } 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "method": "bdev_wait_for_examine" 00:21:24.582 } 00:21:24.582 ] 00:21:24.582 }, 00:21:24.582 { 00:21:24.582 "subsystem": "nbd", 00:21:24.582 "config": [] 00:21:24.582 } 00:21:24.582 ] 00:21:24.582 }' 00:21:24.582 17:32:03 -- target/tls.sh@208 -- # killprocess 4158782 00:21:24.582 17:32:03 -- common/autotest_common.sh@926 -- # '[' -z 4158782 ']' 00:21:24.582 17:32:03 -- common/autotest_common.sh@930 -- # kill -0 4158782 00:21:24.582 17:32:03 -- common/autotest_common.sh@931 -- # uname 00:21:24.582 17:32:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:24.582 17:32:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4158782 00:21:24.582 17:32:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:24.582 17:32:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:24.582 17:32:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4158782' 00:21:24.582 killing process with pid 4158782 00:21:24.582 17:32:03 -- common/autotest_common.sh@945 -- # kill 4158782 00:21:24.582 Received shutdown signal, test time was about 10.000000 seconds 00:21:24.582 00:21:24.582 Latency(us) 00:21:24.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.582 =================================================================================================================== 00:21:24.582 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:24.582 17:32:03 -- common/autotest_common.sh@950 -- # wait 4158782 00:21:24.841 17:32:03 -- target/tls.sh@209 -- # killprocess 4158485 00:21:24.841 17:32:03 -- common/autotest_common.sh@926 -- # '[' -z 4158485 ']' 00:21:24.841 17:32:03 -- common/autotest_common.sh@930 -- # kill -0 4158485 00:21:24.841 17:32:03 -- common/autotest_common.sh@931 -- # uname 00:21:24.841 17:32:03 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:21:24.841 17:32:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4158485 00:21:24.841 17:32:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:24.841 17:32:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:24.841 17:32:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4158485' 00:21:24.841 killing process with pid 4158485 00:21:24.841 17:32:03 -- common/autotest_common.sh@945 -- # kill 4158485 00:21:24.841 17:32:03 -- common/autotest_common.sh@950 -- # wait 4158485 00:21:25.099 17:32:03 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:25.099 17:32:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:25.099 17:32:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:25.099 17:32:03 -- target/tls.sh@212 -- # echo '{ 00:21:25.099 "subsystems": [ 00:21:25.099 { 00:21:25.099 "subsystem": "iobuf", 00:21:25.099 "config": [ 00:21:25.099 { 00:21:25.099 "method": "iobuf_set_options", 00:21:25.099 "params": { 00:21:25.099 "small_pool_count": 8192, 00:21:25.099 "large_pool_count": 1024, 00:21:25.099 "small_bufsize": 8192, 00:21:25.099 "large_bufsize": 135168 00:21:25.099 } 00:21:25.099 } 00:21:25.099 ] 00:21:25.099 }, 00:21:25.099 { 00:21:25.099 "subsystem": "sock", 00:21:25.099 "config": [ 00:21:25.099 { 00:21:25.099 "method": "sock_impl_set_options", 00:21:25.099 "params": { 00:21:25.099 "impl_name": "posix", 00:21:25.099 "recv_buf_size": 2097152, 00:21:25.099 "send_buf_size": 2097152, 00:21:25.099 "enable_recv_pipe": true, 00:21:25.099 "enable_quickack": false, 00:21:25.099 "enable_placement_id": 0, 00:21:25.099 "enable_zerocopy_send_server": true, 00:21:25.099 "enable_zerocopy_send_client": false, 00:21:25.099 "zerocopy_threshold": 0, 00:21:25.099 "tls_version": 0, 00:21:25.099 "enable_ktls": false 00:21:25.099 } 00:21:25.099 }, 00:21:25.099 { 00:21:25.099 "method": "sock_impl_set_options", 00:21:25.099 "params": { 00:21:25.099 "impl_name": 
"ssl", 00:21:25.099 "recv_buf_size": 4096, 00:21:25.099 "send_buf_size": 4096, 00:21:25.099 "enable_recv_pipe": true, 00:21:25.099 "enable_quickack": false, 00:21:25.099 "enable_placement_id": 0, 00:21:25.099 "enable_zerocopy_send_server": true, 00:21:25.099 "enable_zerocopy_send_client": false, 00:21:25.099 "zerocopy_threshold": 0, 00:21:25.099 "tls_version": 0, 00:21:25.099 "enable_ktls": false 00:21:25.099 } 00:21:25.099 } 00:21:25.100 ] 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "subsystem": "vmd", 00:21:25.100 "config": [] 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "subsystem": "accel", 00:21:25.100 "config": [ 00:21:25.100 { 00:21:25.100 "method": "accel_set_options", 00:21:25.100 "params": { 00:21:25.100 "small_cache_size": 128, 00:21:25.100 "large_cache_size": 16, 00:21:25.100 "task_count": 2048, 00:21:25.100 "sequence_count": 2048, 00:21:25.100 "buf_count": 2048 00:21:25.100 } 00:21:25.100 } 00:21:25.100 ] 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "subsystem": "bdev", 00:21:25.100 "config": [ 00:21:25.100 { 00:21:25.100 "method": "bdev_set_options", 00:21:25.100 "params": { 00:21:25.100 "bdev_io_pool_size": 65535, 00:21:25.100 "bdev_io_cache_size": 256, 00:21:25.100 "bdev_auto_examine": true, 00:21:25.100 "iobuf_small_cache_size": 128, 00:21:25.100 "iobuf_large_cache_size": 16 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "bdev_raid_set_options", 00:21:25.100 "params": { 00:21:25.100 "process_window_size_kb": 1024 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "bdev_iscsi_set_options", 00:21:25.100 "params": { 00:21:25.100 "timeout_sec": 30 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "bdev_nvme_set_options", 00:21:25.100 "params": { 00:21:25.100 "action_on_timeout": "none", 00:21:25.100 "timeout_us": 0, 00:21:25.100 "timeout_admin_us": 0, 00:21:25.100 "keep_alive_timeout_ms": 10000, 00:21:25.100 "transport_retry_count": 4, 00:21:25.100 "arbitration_burst": 0, 00:21:25.100 
"low_priority_weight": 0, 00:21:25.100 "medium_priority_weight": 0, 00:21:25.100 "high_priority_weight": 0, 00:21:25.100 "nvme_adminq_poll_period_us": 10000, 00:21:25.100 "nvme_ioq_poll_period_us": 0, 00:21:25.100 "io_queue_requests": 0, 00:21:25.100 "delay_cmd_submit": true, 00:21:25.100 "bdev_retry_count": 3, 00:21:25.100 "transport_ack_timeout": 0, 00:21:25.100 "ctrlr_loss_timeout_sec": 0, 00:21:25.100 "reconnect_delay_sec": 0, 00:21:25.100 "fast_io_fail_timeout_sec": 0, 00:21:25.100 "generate_uuids": false, 00:21:25.100 "transport_tos": 0, 00:21:25.100 "io_path_stat": false, 00:21:25.100 "allow_accel_sequence": false 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "bdev_nvme_set_hotplug", 00:21:25.100 "params": { 00:21:25.100 "period_us": 100000, 00:21:25.100 "enable": false 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "bdev_malloc_create", 00:21:25.100 "params": { 00:21:25.100 "name": "malloc0", 00:21:25.100 "num_blocks": 8192, 00:21:25.100 "block_size": 4096, 00:21:25.100 "physical_block_size": 4096, 00:21:25.100 "uuid": "e31bccd4-bf8a-447c-9264-8fde6e14ca4c", 00:21:25.100 "optimal_io_boundary": 0 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "bdev_wait_for_examine" 00:21:25.100 } 00:21:25.100 ] 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "subsystem": "nbd", 00:21:25.100 "config": [] 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "subsystem": "scheduler", 00:21:25.100 "config": [ 00:21:25.100 { 00:21:25.100 "method": "framework_set_scheduler", 00:21:25.100 "params": { 00:21:25.100 "name": "static" 00:21:25.100 } 00:21:25.100 } 00:21:25.100 ] 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "subsystem": "nvmf", 00:21:25.100 "config": [ 00:21:25.100 { 00:21:25.100 "method": "nvmf_set_config", 00:21:25.100 "params": { 00:21:25.100 "discovery_filter": "match_any", 00:21:25.100 "admin_cmd_passthru": { 00:21:25.100 "identify_ctrlr": false 00:21:25.100 } 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 
00:21:25.100 "method": "nvmf_set_max_subsystems", 00:21:25.100 "params": { 00:21:25.100 "max_subsystems": 1024 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "nvmf_set_crdt", 00:21:25.100 "params": { 00:21:25.100 "crdt1": 0, 00:21:25.100 "crdt2": 0, 00:21:25.100 "crdt3": 0 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "nvmf_create_transport", 00:21:25.100 "params": { 00:21:25.100 "trtype": "TCP", 00:21:25.100 "max_queue_depth": 128, 00:21:25.100 "max_io_qpairs_per_ctrlr": 127, 00:21:25.100 "in_capsule_data_size": 4096, 00:21:25.100 "max_io_size": 131072, 00:21:25.100 "io_unit_size": 131072, 00:21:25.100 "max_aq_depth": 128, 00:21:25.100 "num_shared_buffers": 511, 00:21:25.100 "buf_cache_size": 4294967295, 00:21:25.100 "dif_insert_or_strip": false, 00:21:25.100 "zcopy": false, 00:21:25.100 "c2h_success": false, 00:21:25.100 "sock_priority": 0, 00:21:25.100 "abort_timeout_sec": 1 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "nvmf_create_subsystem", 00:21:25.100 "params": { 00:21:25.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.100 "allow_any_host": false, 00:21:25.100 "serial_number": "SPDK00000000000001", 00:21:25.100 "model_number": "SPDK bdev Controller", 00:21:25.100 "max_namespaces": 10, 00:21:25.100 "min_cntlid": 1, 00:21:25.100 "max_cntlid": 65519, 00:21:25.100 "ana_reporting": false 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "nvmf_subsystem_add_host", 00:21:25.100 "params": { 00:21:25.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.100 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.100 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "nvmf_subsystem_add_ns", 00:21:25.100 "params": { 00:21:25.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.100 "namespace": { 00:21:25.100 "nsid": 1, 00:21:25.100 "bdev_name": "malloc0", 00:21:25.100 "nguid": 
"E31BCCD4BF8A447C92648FDE6E14CA4C", 00:21:25.100 "uuid": "e31bccd4-bf8a-447c-9264-8fde6e14ca4c" 00:21:25.100 } 00:21:25.100 } 00:21:25.100 }, 00:21:25.100 { 00:21:25.100 "method": "nvmf_subsystem_add_listener", 00:21:25.100 "params": { 00:21:25.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.100 "listen_address": { 00:21:25.100 "trtype": "TCP", 00:21:25.100 "adrfam": "IPv4", 00:21:25.100 "traddr": "10.0.0.2", 00:21:25.100 "trsvcid": "4420" 00:21:25.100 }, 00:21:25.100 "secure_channel": true 00:21:25.100 } 00:21:25.100 } 00:21:25.100 ] 00:21:25.100 } 00:21:25.100 ] 00:21:25.100 }' 00:21:25.100 17:32:03 -- common/autotest_common.sh@10 -- # set +x 00:21:25.100 17:32:03 -- nvmf/common.sh@469 -- # nvmfpid=4159322 00:21:25.100 17:32:03 -- nvmf/common.sh@470 -- # waitforlisten 4159322 00:21:25.100 17:32:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:25.100 17:32:03 -- common/autotest_common.sh@819 -- # '[' -z 4159322 ']' 00:21:25.100 17:32:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.100 17:32:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:25.100 17:32:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.100 17:32:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:25.100 17:32:03 -- common/autotest_common.sh@10 -- # set +x 00:21:25.100 [2024-07-12 17:32:03.879249] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:25.100 [2024-07-12 17:32:03.879310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.100 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.100 [2024-07-12 17:32:03.954762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.100 [2024-07-12 17:32:03.995854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:25.100 [2024-07-12 17:32:03.995998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.100 [2024-07-12 17:32:03.996009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.100 [2024-07-12 17:32:03.996019] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.100 [2024-07-12 17:32:03.996039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.379 [2024-07-12 17:32:04.191030] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.379 [2024-07-12 17:32:04.223051] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.379 [2024-07-12 17:32:04.223248] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.004 17:32:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:26.004 17:32:04 -- common/autotest_common.sh@852 -- # return 0 00:21:26.004 17:32:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:26.004 17:32:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:26.004 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:21:26.004 17:32:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.004 17:32:04 -- target/tls.sh@216 -- # bdevperf_pid=4159443 
00:21:26.004 17:32:04 -- target/tls.sh@217 -- # waitforlisten 4159443 /var/tmp/bdevperf.sock 00:21:26.004 17:32:04 -- common/autotest_common.sh@819 -- # '[' -z 4159443 ']' 00:21:26.004 17:32:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.004 17:32:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:26.004 17:32:04 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:26.004 17:32:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.004 17:32:04 -- target/tls.sh@213 -- # echo '{ 00:21:26.004 "subsystems": [ 00:21:26.004 { 00:21:26.004 "subsystem": "iobuf", 00:21:26.004 "config": [ 00:21:26.004 { 00:21:26.004 "method": "iobuf_set_options", 00:21:26.004 "params": { 00:21:26.004 "small_pool_count": 8192, 00:21:26.004 "large_pool_count": 1024, 00:21:26.004 "small_bufsize": 8192, 00:21:26.004 "large_bufsize": 135168 00:21:26.004 } 00:21:26.004 } 00:21:26.004 ] 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "subsystem": "sock", 00:21:26.004 "config": [ 00:21:26.004 { 00:21:26.004 "method": "sock_impl_set_options", 00:21:26.004 "params": { 00:21:26.004 "impl_name": "posix", 00:21:26.004 "recv_buf_size": 2097152, 00:21:26.004 "send_buf_size": 2097152, 00:21:26.004 "enable_recv_pipe": true, 00:21:26.004 "enable_quickack": false, 00:21:26.004 "enable_placement_id": 0, 00:21:26.004 "enable_zerocopy_send_server": true, 00:21:26.004 "enable_zerocopy_send_client": false, 00:21:26.004 "zerocopy_threshold": 0, 00:21:26.004 "tls_version": 0, 00:21:26.004 "enable_ktls": false 00:21:26.004 } 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "method": "sock_impl_set_options", 00:21:26.004 "params": { 
00:21:26.004 "impl_name": "ssl", 00:21:26.004 "recv_buf_size": 4096, 00:21:26.004 "send_buf_size": 4096, 00:21:26.004 "enable_recv_pipe": true, 00:21:26.004 "enable_quickack": false, 00:21:26.004 "enable_placement_id": 0, 00:21:26.004 "enable_zerocopy_send_server": true, 00:21:26.004 "enable_zerocopy_send_client": false, 00:21:26.004 "zerocopy_threshold": 0, 00:21:26.004 "tls_version": 0, 00:21:26.004 "enable_ktls": false 00:21:26.004 } 00:21:26.004 } 00:21:26.004 ] 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "subsystem": "vmd", 00:21:26.004 "config": [] 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "subsystem": "accel", 00:21:26.004 "config": [ 00:21:26.004 { 00:21:26.004 "method": "accel_set_options", 00:21:26.004 "params": { 00:21:26.004 "small_cache_size": 128, 00:21:26.004 "large_cache_size": 16, 00:21:26.004 "task_count": 2048, 00:21:26.004 "sequence_count": 2048, 00:21:26.004 "buf_count": 2048 00:21:26.004 } 00:21:26.004 } 00:21:26.004 ] 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "subsystem": "bdev", 00:21:26.004 "config": [ 00:21:26.004 { 00:21:26.004 "method": "bdev_set_options", 00:21:26.004 "params": { 00:21:26.004 "bdev_io_pool_size": 65535, 00:21:26.004 "bdev_io_cache_size": 256, 00:21:26.004 "bdev_auto_examine": true, 00:21:26.004 "iobuf_small_cache_size": 128, 00:21:26.004 "iobuf_large_cache_size": 16 00:21:26.004 } 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "method": "bdev_raid_set_options", 00:21:26.004 "params": { 00:21:26.004 "process_window_size_kb": 1024 00:21:26.004 } 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "method": "bdev_iscsi_set_options", 00:21:26.004 "params": { 00:21:26.004 "timeout_sec": 30 00:21:26.004 } 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "method": "bdev_nvme_set_options", 00:21:26.004 "params": { 00:21:26.004 "action_on_timeout": "none", 00:21:26.004 "timeout_us": 0, 00:21:26.004 "timeout_admin_us": 0, 00:21:26.004 "keep_alive_timeout_ms": 10000, 00:21:26.004 "transport_retry_count": 4, 00:21:26.004 "arbitration_burst": 
0, 00:21:26.004 "low_priority_weight": 0, 00:21:26.004 "medium_priority_weight": 0, 00:21:26.004 "high_priority_weight": 0, 00:21:26.004 "nvme_adminq_poll_period_us": 10000, 00:21:26.004 "nvme_ioq_poll_period_us": 0, 00:21:26.004 "io_queue_requests": 512, 00:21:26.004 "delay_cmd_submit": true, 00:21:26.004 "bdev_retry_count": 3, 00:21:26.004 "transport_ack_timeout": 0, 00:21:26.004 "ctrlr_loss_timeout_sec": 0, 00:21:26.004 "reconnect_delay_sec": 0, 00:21:26.004 "fast_io_fail_timeout_sec": 0, 00:21:26.004 "generate_uuids": false, 00:21:26.004 "transport_tos": 0, 00:21:26.004 "io_path_stat": false, 00:21:26.004 "allow_accel_sequence": false 00:21:26.004 } 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "method": "bdev_nvme_attach_controller", 00:21:26.004 "params": { 00:21:26.004 "name": "TLSTEST", 00:21:26.004 "trtype": "TCP", 00:21:26.004 "adrfam": "IPv4", 00:21:26.004 "traddr": "10.0.0.2", 00:21:26.004 "trsvcid": "4420", 00:21:26.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.004 "prchk_reftag": false, 00:21:26.004 "prchk_guard": false, 00:21:26.004 "ctrlr_loss_timeout_sec": 0, 00:21:26.004 "reconnect_delay_sec": 0, 00:21:26.004 "fast_io_fail_timeout_sec": 0, 00:21:26.004 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:26.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.004 "hdgst": false, 00:21:26.004 "ddgst": false 00:21:26.004 } 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "method": "bdev_nvme_set_hotplug", 00:21:26.004 "params": { 00:21:26.004 "period_us": 100000, 00:21:26.004 "enable": false 00:21:26.004 } 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "method": "bdev_wait_for_examine" 00:21:26.004 } 00:21:26.004 ] 00:21:26.004 }, 00:21:26.004 { 00:21:26.004 "subsystem": "nbd", 00:21:26.004 "config": [] 00:21:26.004 } 00:21:26.004 ] 00:21:26.004 }' 00:21:26.004 17:32:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:26.004 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:21:26.004 
[2024-07-12 17:32:04.882349] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:26.004 [2024-07-12 17:32:04.882407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159443 ] 00:21:26.004 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.004 [2024-07-12 17:32:04.941338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.262 [2024-07-12 17:32:04.978828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.262 [2024-07-12 17:32:05.106876] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.197 17:32:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:27.197 17:32:05 -- common/autotest_common.sh@852 -- # return 0 00:21:27.197 17:32:05 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:27.197 Running I/O for 10 seconds... 
00:21:37.170 00:21:37.170 Latency(us) 00:21:37.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.170 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:37.170 Verification LBA range: start 0x0 length 0x2000 00:21:37.170 TLSTESTn1 : 10.02 4766.27 18.62 0.00 0.00 26829.02 5362.04 63391.19 00:21:37.170 =================================================================================================================== 00:21:37.170 Total : 4766.27 18.62 0.00 0.00 26829.02 5362.04 63391.19 00:21:37.170 0 00:21:37.170 17:32:16 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.170 17:32:16 -- target/tls.sh@223 -- # killprocess 4159443 00:21:37.170 17:32:16 -- common/autotest_common.sh@926 -- # '[' -z 4159443 ']' 00:21:37.170 17:32:16 -- common/autotest_common.sh@930 -- # kill -0 4159443 00:21:37.170 17:32:16 -- common/autotest_common.sh@931 -- # uname 00:21:37.170 17:32:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.170 17:32:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4159443 00:21:37.170 17:32:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:37.170 17:32:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:37.170 17:32:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4159443' 00:21:37.170 killing process with pid 4159443 00:21:37.170 17:32:16 -- common/autotest_common.sh@945 -- # kill 4159443 00:21:37.170 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.170 00:21:37.170 Latency(us) 00:21:37.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.170 =================================================================================================================== 00:21:37.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.170 17:32:16 -- common/autotest_common.sh@950 -- # wait 4159443 00:21:37.430 17:32:16 -- 
target/tls.sh@224 -- # killprocess 4159322 00:21:37.430 17:32:16 -- common/autotest_common.sh@926 -- # '[' -z 4159322 ']' 00:21:37.430 17:32:16 -- common/autotest_common.sh@930 -- # kill -0 4159322 00:21:37.430 17:32:16 -- common/autotest_common.sh@931 -- # uname 00:21:37.430 17:32:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.430 17:32:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4159322 00:21:37.430 17:32:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:37.430 17:32:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:37.430 17:32:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4159322' 00:21:37.430 killing process with pid 4159322 00:21:37.430 17:32:16 -- common/autotest_common.sh@945 -- # kill 4159322 00:21:37.430 17:32:16 -- common/autotest_common.sh@950 -- # wait 4159322 00:21:37.689 17:32:16 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:37.689 17:32:16 -- target/tls.sh@227 -- # cleanup 00:21:37.689 17:32:16 -- target/tls.sh@15 -- # process_shm --id 0 00:21:37.689 17:32:16 -- common/autotest_common.sh@796 -- # type=--id 00:21:37.689 17:32:16 -- common/autotest_common.sh@797 -- # id=0 00:21:37.689 17:32:16 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:37.689 17:32:16 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:37.689 17:32:16 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:37.689 17:32:16 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:37.689 17:32:16 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:37.689 17:32:16 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:37.689 nvmf_trace.0 00:21:37.689 17:32:16 -- common/autotest_common.sh@811 -- # return 0 00:21:37.689 17:32:16 -- target/tls.sh@16 -- # killprocess 4159443 
00:21:37.689 17:32:16 -- common/autotest_common.sh@926 -- # '[' -z 4159443 ']' 00:21:37.689 17:32:16 -- common/autotest_common.sh@930 -- # kill -0 4159443 00:21:37.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4159443) - No such process 00:21:37.689 17:32:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4159443 is not found' 00:21:37.689 Process with pid 4159443 is not found 00:21:37.689 17:32:16 -- target/tls.sh@17 -- # nvmftestfini 00:21:37.689 17:32:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.689 17:32:16 -- nvmf/common.sh@116 -- # sync 00:21:37.689 17:32:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.689 17:32:16 -- nvmf/common.sh@119 -- # set +e 00:21:37.689 17:32:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.689 17:32:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.689 rmmod nvme_tcp 00:21:37.689 rmmod nvme_fabrics 00:21:37.689 rmmod nvme_keyring 00:21:37.689 17:32:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.689 17:32:16 -- nvmf/common.sh@123 -- # set -e 00:21:37.689 17:32:16 -- nvmf/common.sh@124 -- # return 0 00:21:37.689 17:32:16 -- nvmf/common.sh@477 -- # '[' -n 4159322 ']' 00:21:37.689 17:32:16 -- nvmf/common.sh@478 -- # killprocess 4159322 00:21:37.689 17:32:16 -- common/autotest_common.sh@926 -- # '[' -z 4159322 ']' 00:21:37.689 17:32:16 -- common/autotest_common.sh@930 -- # kill -0 4159322 00:21:37.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4159322) - No such process 00:21:37.689 17:32:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4159322 is not found' 00:21:37.689 Process with pid 4159322 is not found 00:21:37.689 17:32:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.689 17:32:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.689 17:32:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.689 17:32:16 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.689 17:32:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.689 17:32:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.689 17:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.689 17:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.226 17:32:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:40.226 17:32:18 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:40.226 00:21:40.226 real 1m11.453s 00:21:40.226 user 1m47.344s 00:21:40.226 sys 0m26.596s 00:21:40.226 17:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.226 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:21:40.226 ************************************ 00:21:40.226 END TEST nvmf_tls 00:21:40.226 ************************************ 00:21:40.226 17:32:18 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:40.226 17:32:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:40.226 17:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:40.226 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:21:40.226 ************************************ 00:21:40.226 START TEST nvmf_fips 00:21:40.226 ************************************ 00:21:40.226 17:32:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:40.226 * Looking for test storage... 
00:21:40.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:40.226 17:32:18 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.226 17:32:18 -- nvmf/common.sh@7 -- # uname -s 00:21:40.226 17:32:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.226 17:32:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.226 17:32:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.226 17:32:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.226 17:32:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.226 17:32:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.226 17:32:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.226 17:32:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.226 17:32:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.226 17:32:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.226 17:32:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:40.226 17:32:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:40.226 17:32:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.226 17:32:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.226 17:32:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.226 17:32:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.226 17:32:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.226 17:32:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.226 17:32:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.226 17:32:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.226 17:32:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.226 17:32:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.226 17:32:18 -- paths/export.sh@5 -- # export PATH 00:21:40.226 17:32:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.226 17:32:18 -- nvmf/common.sh@46 -- # : 0 00:21:40.226 17:32:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:40.226 17:32:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:40.226 17:32:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:40.226 17:32:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.226 17:32:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.226 17:32:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:40.226 17:32:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:40.226 17:32:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:40.226 17:32:18 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:40.226 17:32:18 -- fips/fips.sh@89 -- # check_openssl_version 00:21:40.226 17:32:18 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:40.226 17:32:18 -- fips/fips.sh@85 -- # openssl version 00:21:40.226 17:32:18 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:40.226 17:32:18 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:40.226 17:32:18 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:40.227 17:32:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:40.227 17:32:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:40.227 17:32:18 -- scripts/common.sh@335 -- # IFS=.-: 00:21:40.227 17:32:18 -- scripts/common.sh@335 -- # read -ra ver1 00:21:40.227 17:32:18 -- scripts/common.sh@336 -- # IFS=.-: 
00:21:40.227 17:32:18 -- scripts/common.sh@336 -- # read -ra ver2 00:21:40.227 17:32:18 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:40.227 17:32:18 -- scripts/common.sh@339 -- # ver1_l=3 00:21:40.227 17:32:18 -- scripts/common.sh@340 -- # ver2_l=3 00:21:40.227 17:32:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:40.227 17:32:18 -- scripts/common.sh@343 -- # case "$op" in 00:21:40.227 17:32:18 -- scripts/common.sh@347 -- # : 1 00:21:40.227 17:32:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:40.227 17:32:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.227 17:32:18 -- scripts/common.sh@364 -- # decimal 3 00:21:40.227 17:32:18 -- scripts/common.sh@352 -- # local d=3 00:21:40.227 17:32:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:40.227 17:32:18 -- scripts/common.sh@354 -- # echo 3 00:21:40.227 17:32:18 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:40.227 17:32:18 -- scripts/common.sh@365 -- # decimal 3 00:21:40.227 17:32:18 -- scripts/common.sh@352 -- # local d=3 00:21:40.227 17:32:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:40.227 17:32:18 -- scripts/common.sh@354 -- # echo 3 00:21:40.227 17:32:18 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:40.227 17:32:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:40.227 17:32:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:40.227 17:32:18 -- scripts/common.sh@363 -- # (( v++ )) 00:21:40.227 17:32:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.227 17:32:18 -- scripts/common.sh@364 -- # decimal 0 00:21:40.227 17:32:18 -- scripts/common.sh@352 -- # local d=0 00:21:40.227 17:32:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:40.227 17:32:18 -- scripts/common.sh@354 -- # echo 0 00:21:40.227 17:32:18 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:40.227 17:32:18 -- scripts/common.sh@365 -- # decimal 0 00:21:40.227 17:32:18 -- scripts/common.sh@352 -- # local d=0 00:21:40.227 17:32:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:40.227 17:32:18 -- scripts/common.sh@354 -- # echo 0 00:21:40.227 17:32:18 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:40.227 17:32:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:40.227 17:32:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:40.227 17:32:18 -- scripts/common.sh@363 -- # (( v++ )) 00:21:40.227 17:32:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.227 17:32:18 -- scripts/common.sh@364 -- # decimal 9 00:21:40.227 17:32:18 -- scripts/common.sh@352 -- # local d=9 00:21:40.227 17:32:18 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:40.227 17:32:18 -- scripts/common.sh@354 -- # echo 9 00:21:40.227 17:32:18 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:40.227 17:32:18 -- scripts/common.sh@365 -- # decimal 0 00:21:40.227 17:32:18 -- scripts/common.sh@352 -- # local d=0 00:21:40.227 17:32:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:40.227 17:32:18 -- scripts/common.sh@354 -- # echo 0 00:21:40.227 17:32:18 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:40.227 17:32:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:40.227 17:32:18 -- scripts/common.sh@366 -- # return 0 00:21:40.227 17:32:18 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:40.227 17:32:18 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:40.227 17:32:18 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:40.227 17:32:18 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:40.227 17:32:18 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:40.227 17:32:18 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:40.227 17:32:18 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:40.227 17:32:18 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:40.227 17:32:18 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:40.227 17:32:18 -- fips/fips.sh@114 -- # build_openssl_config 00:21:40.227 17:32:18 -- fips/fips.sh@37 -- # cat 00:21:40.227 17:32:18 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:40.227 17:32:18 -- fips/fips.sh@58 -- # cat - 00:21:40.227 17:32:18 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:40.227 17:32:18 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:40.227 17:32:18 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:40.227 17:32:18 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:40.227 17:32:18 -- fips/fips.sh@117 -- # openssl list -providers 00:21:40.227 17:32:18 -- fips/fips.sh@117 -- # grep name 00:21:40.227 17:32:18 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:40.227 17:32:18 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:40.227 17:32:18 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:40.227 17:32:18 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:40.227 17:32:18 -- common/autotest_common.sh@640 -- # local es=0 00:21:40.227 17:32:18 -- fips/fips.sh@128 -- # : 00:21:40.227 17:32:18 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:40.227 17:32:18 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:40.227 17:32:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.227 17:32:18 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:40.227 17:32:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.227 17:32:18 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:40.227 17:32:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.227 17:32:18 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:40.227 17:32:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:40.227 17:32:18 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:40.227 Error setting digest 00:21:40.227 0072B4834A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, 
Algorithm (MD5 : 97), Properties () 00:21:40.227 0072B4834A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:40.227 17:32:18 -- common/autotest_common.sh@643 -- # es=1 00:21:40.227 17:32:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:40.227 17:32:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:40.227 17:32:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:40.227 17:32:19 -- fips/fips.sh@131 -- # nvmftestinit 00:21:40.227 17:32:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:40.227 17:32:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.227 17:32:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:40.227 17:32:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:40.227 17:32:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:40.227 17:32:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.227 17:32:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.227 17:32:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.227 17:32:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:40.227 17:32:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:40.227 17:32:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:40.227 17:32:19 -- common/autotest_common.sh@10 -- # set +x 00:21:45.504 17:32:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:45.504 17:32:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:45.504 17:32:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:45.504 17:32:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:45.504 17:32:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:45.504 17:32:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:45.504 17:32:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:45.504 17:32:24 -- nvmf/common.sh@294 -- # net_devs=() 00:21:45.504 17:32:24 -- nvmf/common.sh@294 -- 
# local -ga net_devs 00:21:45.504 17:32:24 -- nvmf/common.sh@295 -- # e810=() 00:21:45.504 17:32:24 -- nvmf/common.sh@295 -- # local -ga e810 00:21:45.504 17:32:24 -- nvmf/common.sh@296 -- # x722=() 00:21:45.504 17:32:24 -- nvmf/common.sh@296 -- # local -ga x722 00:21:45.504 17:32:24 -- nvmf/common.sh@297 -- # mlx=() 00:21:45.504 17:32:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:45.504 17:32:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.504 17:32:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:45.504 17:32:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:45.504 17:32:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:45.504 17:32:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:45.504 17:32:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:45.504 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:21:45.504 17:32:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:45.504 17:32:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:45.504 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:45.504 17:32:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:45.504 17:32:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:45.504 17:32:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:45.504 17:32:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.504 17:32:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:45.504 17:32:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.504 17:32:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:45.504 Found net devices under 0000:af:00.0: cvl_0_0 00:21:45.504 17:32:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.504 17:32:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:45.504 17:32:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.504 17:32:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:45.504 
17:32:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.504 17:32:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:45.504 Found net devices under 0000:af:00.1: cvl_0_1 00:21:45.505 17:32:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.505 17:32:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:45.505 17:32:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:45.505 17:32:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:45.505 17:32:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:45.505 17:32:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:45.505 17:32:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.505 17:32:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.505 17:32:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.505 17:32:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:45.505 17:32:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.505 17:32:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.505 17:32:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:45.505 17:32:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.505 17:32:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.505 17:32:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:45.505 17:32:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:45.505 17:32:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.505 17:32:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.505 17:32:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.505 17:32:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.505 17:32:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:45.505 17:32:24 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.505 17:32:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.505 17:32:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.505 17:32:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:45.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:21:45.505 00:21:45.505 --- 10.0.0.2 ping statistics --- 00:21:45.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.505 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:45.505 17:32:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:21:45.505 00:21:45.505 --- 10.0.0.1 ping statistics --- 00:21:45.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.505 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:21:45.505 17:32:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.505 17:32:24 -- nvmf/common.sh@410 -- # return 0 00:21:45.505 17:32:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:45.505 17:32:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.505 17:32:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:45.505 17:32:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:45.505 17:32:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.505 17:32:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:45.505 17:32:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:45.505 17:32:24 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:45.505 17:32:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:45.505 17:32:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:45.505 17:32:24 -- 
common/autotest_common.sh@10 -- # set +x 00:21:45.505 17:32:24 -- nvmf/common.sh@469 -- # nvmfpid=4165154 00:21:45.505 17:32:24 -- nvmf/common.sh@470 -- # waitforlisten 4165154 00:21:45.505 17:32:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.505 17:32:24 -- common/autotest_common.sh@819 -- # '[' -z 4165154 ']' 00:21:45.505 17:32:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.505 17:32:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:45.505 17:32:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.505 17:32:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:45.505 17:32:24 -- common/autotest_common.sh@10 -- # set +x 00:21:45.764 [2024-07-12 17:32:24.545514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:45.764 [2024-07-12 17:32:24.545575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.764 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.764 [2024-07-12 17:32:24.623094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.764 [2024-07-12 17:32:24.663959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.764 [2024-07-12 17:32:24.664102] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.764 [2024-07-12 17:32:24.664112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:45.764 [2024-07-12 17:32:24.664122] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.764 [2024-07-12 17:32:24.664150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.702 17:32:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:46.702 17:32:25 -- common/autotest_common.sh@852 -- # return 0 00:21:46.702 17:32:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:46.702 17:32:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:46.702 17:32:25 -- common/autotest_common.sh@10 -- # set +x 00:21:46.702 17:32:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.702 17:32:25 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:46.702 17:32:25 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:46.702 17:32:25 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:46.702 17:32:25 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:46.702 17:32:25 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:46.702 17:32:25 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:46.702 17:32:25 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:46.702 17:32:25 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:46.961 [2024-07-12 17:32:25.699456] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.961 [2024-07-12 17:32:25.715460] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.961 [2024-07-12 17:32:25.715658] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:46.961 malloc0 00:21:46.961 17:32:25 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.961 17:32:25 -- fips/fips.sh@148 -- # bdevperf_pid=4165336 00:21:46.961 17:32:25 -- fips/fips.sh@149 -- # waitforlisten 4165336 /var/tmp/bdevperf.sock 00:21:46.961 17:32:25 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.961 17:32:25 -- common/autotest_common.sh@819 -- # '[' -z 4165336 ']' 00:21:46.961 17:32:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.961 17:32:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.961 17:32:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.961 17:32:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.961 17:32:25 -- common/autotest_common.sh@10 -- # set +x 00:21:46.961 [2024-07-12 17:32:25.845641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:46.961 [2024-07-12 17:32:25.845702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165336 ] 00:21:46.961 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.961 [2024-07-12 17:32:25.904053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.238 [2024-07-12 17:32:25.940234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.807 17:32:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.807 17:32:26 -- common/autotest_common.sh@852 -- # return 0 00:21:47.807 17:32:26 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:48.066 [2024-07-12 17:32:26.901502] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.066 TLSTESTn1 00:21:48.066 17:32:26 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.324 Running I/O for 10 seconds... 
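The key handed to `bdev_nvme_attach_controller --psk` above is an NVMe/TCP TLS configured PSK. As a minimal sketch (not part of the test scripts above; the function name is hypothetical), its shape can be checked with a bash regex: the `NVMeTLSkey-1:` prefix, a hash-indicator field (`01` for SHA-256, `02` for SHA-384), a base64 payload, and a trailing colon:

```shell
#!/usr/bin/env bash
# Hypothetical helper, assuming the configured-PSK layout
# "NVMeTLSkey-1:<hh>:<base64>:" used by the fips.sh test above.
# The real scripts pass the key straight through to the RPC.
is_nvme_tls_psk() {
  local key=$1
  # hash field 01 or 02, base64 body (optional '=' padding), trailing ':'
  [[ $key =~ ^NVMeTLSkey-1:(01|02):[A-Za-z0-9+/]+={0,2}:$ ]]
}

is_nvme_tls_psk 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' \
  && echo "key shape ok"
```

This only validates the envelope; whether the payload decodes to a PSK of a length the target accepts is left to the target itself.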
00:21:58.298 00:21:58.298 Latency(us) 00:21:58.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.298 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.298 Verification LBA range: start 0x0 length 0x2000 00:21:58.298 TLSTESTn1 : 10.04 5515.47 21.54 0.00 0.00 23170.18 3500.22 49569.05 00:21:58.298 =================================================================================================================== 00:21:58.298 Total : 5515.47 21.54 0.00 0.00 23170.18 3500.22 49569.05 00:21:58.298 0 00:21:58.298 17:32:37 -- fips/fips.sh@1 -- # cleanup 00:21:58.298 17:32:37 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:58.298 17:32:37 -- common/autotest_common.sh@796 -- # type=--id 00:21:58.298 17:32:37 -- common/autotest_common.sh@797 -- # id=0 00:21:58.298 17:32:37 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:58.298 17:32:37 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:58.298 17:32:37 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:58.298 17:32:37 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:58.298 17:32:37 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:58.298 17:32:37 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:58.298 nvmf_trace.0 00:21:58.298 17:32:37 -- common/autotest_common.sh@811 -- # return 0 00:21:58.298 17:32:37 -- fips/fips.sh@16 -- # killprocess 4165336 00:21:58.298 17:32:37 -- common/autotest_common.sh@926 -- # '[' -z 4165336 ']' 00:21:58.298 17:32:37 -- common/autotest_common.sh@930 -- # kill -0 4165336 00:21:58.298 17:32:37 -- common/autotest_common.sh@931 -- # uname 00:21:58.298 17:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:58.298 17:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4165336 00:21:58.556 
17:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:58.556 17:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:58.556 17:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4165336' 00:21:58.556 killing process with pid 4165336 00:21:58.556 17:32:37 -- common/autotest_common.sh@945 -- # kill 4165336 00:21:58.556 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.556 00:21:58.556 Latency(us) 00:21:58.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.556 =================================================================================================================== 00:21:58.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.556 17:32:37 -- common/autotest_common.sh@950 -- # wait 4165336 00:21:58.556 17:32:37 -- fips/fips.sh@17 -- # nvmftestfini 00:21:58.556 17:32:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:58.556 17:32:37 -- nvmf/common.sh@116 -- # sync 00:21:58.556 17:32:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:58.556 17:32:37 -- nvmf/common.sh@119 -- # set +e 00:21:58.556 17:32:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:58.556 17:32:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:58.556 rmmod nvme_tcp 00:21:58.556 rmmod nvme_fabrics 00:21:58.556 rmmod nvme_keyring 00:21:58.556 17:32:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:58.814 17:32:37 -- nvmf/common.sh@123 -- # set -e 00:21:58.814 17:32:37 -- nvmf/common.sh@124 -- # return 0 00:21:58.814 17:32:37 -- nvmf/common.sh@477 -- # '[' -n 4165154 ']' 00:21:58.814 17:32:37 -- nvmf/common.sh@478 -- # killprocess 4165154 00:21:58.814 17:32:37 -- common/autotest_common.sh@926 -- # '[' -z 4165154 ']' 00:21:58.814 17:32:37 -- common/autotest_common.sh@930 -- # kill -0 4165154 00:21:58.814 17:32:37 -- common/autotest_common.sh@931 -- # uname 00:21:58.814 17:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:21:58.814 17:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4165154 00:21:58.814 17:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:58.814 17:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:58.814 17:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4165154' 00:21:58.814 killing process with pid 4165154 00:21:58.814 17:32:37 -- common/autotest_common.sh@945 -- # kill 4165154 00:21:58.814 17:32:37 -- common/autotest_common.sh@950 -- # wait 4165154 00:21:58.814 17:32:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:58.814 17:32:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:58.814 17:32:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:58.814 17:32:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.814 17:32:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:58.814 17:32:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.814 17:32:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.814 17:32:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.348 17:32:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:01.348 17:32:39 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.348 00:22:01.348 real 0m21.136s 00:22:01.348 user 0m23.325s 00:22:01.348 sys 0m9.073s 00:22:01.348 17:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.348 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.348 ************************************ 00:22:01.348 END TEST nvmf_fips 00:22:01.348 ************************************ 00:22:01.348 17:32:39 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:01.348 17:32:39 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:01.348 17:32:39 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:01.348 17:32:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:01.348 17:32:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.348 ************************************ 00:22:01.348 START TEST nvmf_fuzz 00:22:01.348 ************************************ 00:22:01.348 17:32:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:01.348 * Looking for test storage... 00:22:01.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.348 17:32:39 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.348 17:32:39 -- nvmf/common.sh@7 -- # uname -s 00:22:01.348 17:32:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.348 17:32:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.348 17:32:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.348 17:32:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.348 17:32:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.348 17:32:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.348 17:32:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.348 17:32:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.348 17:32:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.348 17:32:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.348 17:32:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:01.348 17:32:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:01.348 17:32:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.349 17:32:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.349 17:32:39 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:22:01.349 17:32:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.349 17:32:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.349 17:32:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.349 17:32:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.349 17:32:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.349 17:32:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.349 17:32:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.349 17:32:39 -- paths/export.sh@5 -- # export PATH 00:22:01.349 17:32:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.349 17:32:39 -- nvmf/common.sh@46 -- # : 0 00:22:01.349 17:32:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:01.349 17:32:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:01.349 17:32:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:01.349 17:32:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.349 17:32:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.349 17:32:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:01.349 17:32:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:01.349 17:32:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:01.349 17:32:40 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:01.349 17:32:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:01.349 17:32:40 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:01.349 17:32:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:01.349 17:32:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:01.349 17:32:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:01.349 17:32:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.349 17:32:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.349 17:32:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.349 17:32:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:01.349 17:32:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:01.349 17:32:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:01.349 17:32:40 -- common/autotest_common.sh@10 -- # set +x 00:22:06.621 17:32:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:06.621 17:32:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:06.621 17:32:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:06.621 17:32:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:06.621 17:32:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:06.621 17:32:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:06.621 17:32:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:06.621 17:32:45 -- nvmf/common.sh@294 -- # net_devs=() 00:22:06.621 17:32:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:06.621 17:32:45 -- nvmf/common.sh@295 -- # e810=() 00:22:06.621 17:32:45 -- nvmf/common.sh@295 -- # local -ga e810 00:22:06.621 17:32:45 -- nvmf/common.sh@296 -- # x722=() 00:22:06.621 17:32:45 -- nvmf/common.sh@296 -- # local -ga x722 00:22:06.621 17:32:45 -- nvmf/common.sh@297 -- # mlx=() 00:22:06.621 17:32:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:06.621 17:32:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.621 17:32:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:06.621 17:32:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:06.621 17:32:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:06.621 17:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:06.621 17:32:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:06.621 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:06.621 17:32:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:06.621 17:32:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:06.621 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:22:06.621 17:32:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:06.621 17:32:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:06.621 17:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:06.622 17:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.622 17:32:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:06.622 17:32:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.622 17:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:06.622 Found net devices under 0000:af:00.0: cvl_0_0 00:22:06.622 17:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.622 17:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:06.622 17:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.622 17:32:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:06.622 17:32:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.622 17:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:06.622 Found net devices under 0000:af:00.1: cvl_0_1 00:22:06.622 17:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.622 17:32:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:06.622 17:32:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:06.622 17:32:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:06.622 17:32:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:06.622 17:32:45 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:06.622 17:32:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.622 17:32:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.622 17:32:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.622 17:32:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:06.622 17:32:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.622 17:32:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.622 17:32:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:06.622 17:32:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.622 17:32:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.622 17:32:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:06.622 17:32:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:06.622 17:32:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.622 17:32:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.622 17:32:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.622 17:32:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.622 17:32:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:06.622 17:32:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.622 17:32:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.622 17:32:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.622 17:32:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:06.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:06.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:22:06.622 00:22:06.622 --- 10.0.0.2 ping statistics --- 00:22:06.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.622 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:06.622 17:32:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:22:06.622 00:22:06.622 --- 10.0.0.1 ping statistics --- 00:22:06.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.622 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:06.622 17:32:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.622 17:32:45 -- nvmf/common.sh@410 -- # return 0 00:22:06.622 17:32:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:06.622 17:32:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.622 17:32:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:06.622 17:32:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:06.622 17:32:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.622 17:32:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:06.622 17:32:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:06.622 17:32:45 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4171113 00:22:06.622 17:32:45 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:06.622 17:32:45 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:06.622 17:32:45 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4171113 00:22:06.622 17:32:45 -- common/autotest_common.sh@819 -- # '[' -z 4171113 ']' 00:22:06.622 17:32:45 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:22:06.622 17:32:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:06.622 17:32:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.622 17:32:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:06.622 17:32:45 -- common/autotest_common.sh@10 -- # set +x 00:22:07.560 17:32:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:07.560 17:32:46 -- common/autotest_common.sh@852 -- # return 0 00:22:07.560 17:32:46 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.560 17:32:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.560 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:22:07.560 17:32:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.560 17:32:46 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:07.560 17:32:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.560 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:22:07.819 Malloc0 00:22:07.819 17:32:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.819 17:32:46 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.819 17:32:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.819 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:22:07.819 17:32:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.819 17:32:46 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.819 17:32:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.819 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:22:07.819 17:32:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
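The trace above shows `waitforlisten` polling with `max_retries=100` until `nvmf_tgt` is up on `/var/tmp/spdk.sock`. A minimal sketch of that retry pattern, assuming a simple path-exists probe (`wait_for_path` is an illustrative name, not SPDK's actual helper, which probes the RPC socket):

```shell
# Hypothetical sketch of a waitforlisten-style loop: poll until a path such as
# /var/tmp/spdk.sock appears, giving up after max_retries attempts (the trace
# above shows max_retries=100). Not SPDK's real implementation.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
      return 1   # gave up waiting
    fi
    sleep 0.1
  done
  return 0
}
```

SPDK's real helper additionally issues an RPC over the socket to confirm the target is responsive, not just that the socket file exists.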
00:22:07.819 17:32:46 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.819 17:32:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.819 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:22:07.819 17:32:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.819 17:32:46 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:07.819 17:32:46 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:39.911 Fuzzing completed. Shutting down the fuzz application 00:22:39.911 00:22:39.911 Dumping successful admin opcodes: 00:22:39.911 8, 9, 10, 24, 00:22:39.911 Dumping successful io opcodes: 00:22:39.911 0, 9, 00:22:39.911 NS: 0x200003aeff00 I/O qp, Total commands completed: 639816, total successful commands: 3727, random_seed: 2043896896 00:22:39.911 NS: 0x200003aeff00 admin qp, Total commands completed: 77261, total successful commands: 598, random_seed: 1688657280 00:22:39.912 17:33:16 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:39.912 Fuzzing completed. 
Shutting down the fuzz application 00:22:39.912 00:22:39.912 Dumping successful admin opcodes: 00:22:39.912 24, 00:22:39.912 Dumping successful io opcodes: 00:22:39.912 00:22:39.912 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3783821510 00:22:39.912 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3783924278 00:22:39.912 17:33:18 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.912 17:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.912 17:33:18 -- common/autotest_common.sh@10 -- # set +x 00:22:39.912 17:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.912 17:33:18 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:39.912 17:33:18 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:39.912 17:33:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:39.912 17:33:18 -- nvmf/common.sh@116 -- # sync 00:22:39.912 17:33:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:39.912 17:33:18 -- nvmf/common.sh@119 -- # set +e 00:22:39.912 17:33:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:39.912 17:33:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:39.912 rmmod nvme_tcp 00:22:39.912 rmmod nvme_fabrics 00:22:39.912 rmmod nvme_keyring 00:22:39.912 17:33:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:39.912 17:33:18 -- nvmf/common.sh@123 -- # set -e 00:22:39.912 17:33:18 -- nvmf/common.sh@124 -- # return 0 00:22:39.912 17:33:18 -- nvmf/common.sh@477 -- # '[' -n 4171113 ']' 00:22:39.912 17:33:18 -- nvmf/common.sh@478 -- # killprocess 4171113 00:22:39.912 17:33:18 -- common/autotest_common.sh@926 -- # '[' -z 4171113 ']' 00:22:39.912 17:33:18 -- common/autotest_common.sh@930 -- # kill -0 4171113 00:22:39.912 17:33:18 -- common/autotest_common.sh@931 -- # uname 00:22:39.912 17:33:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
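The shutdown trace above runs `killprocess` against the `nvmf_tgt` pid: a `kill -0` liveness probe, a `ps --no-headers -o comm=` name lookup, and a guard against signalling a bare `sudo` before killing. A hedged reconstruction of those checks (function name and exact flow are illustrative, not SPDK's source):

```shell
# Sketch of the killprocess checks visible in the trace: probe liveness with
# kill -0, look up the command name, refuse to signal sudo, then kill and reap.
killproc_sketch() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1   # process already gone
  name=$(ps --no-headers -o comm= -p "$pid")
  if [ "$name" = sudo ]; then
    return 1                               # never signal a bare sudo
  fi
  kill "$pid" 2>/dev/null
  wait "$pid" 2>/dev/null || true          # reap if it was our child
  return 0
}
```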
00:22:39.912 17:33:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4171113 00:22:39.912 17:33:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:39.912 17:33:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:39.912 17:33:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4171113' 00:22:39.912 killing process with pid 4171113 00:22:39.912 17:33:18 -- common/autotest_common.sh@945 -- # kill 4171113 00:22:39.912 17:33:18 -- common/autotest_common.sh@950 -- # wait 4171113 00:22:39.912 17:33:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:39.912 17:33:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:39.912 17:33:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:39.912 17:33:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.912 17:33:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:39.912 17:33:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.912 17:33:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.912 17:33:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.872 17:33:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:41.872 17:33:20 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:41.872 00:22:41.872 real 0m40.827s 00:22:41.872 user 0m54.479s 00:22:41.872 sys 0m15.612s 00:22:41.872 17:33:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.872 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:22:41.872 ************************************ 00:22:41.872 END TEST nvmf_fuzz 00:22:41.872 ************************************ 00:22:41.872 17:33:20 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:22:41.872 17:33:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:41.872 17:33:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:41.872 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:22:41.872 ************************************ 00:22:41.872 START TEST nvmf_multiconnection 00:22:41.872 ************************************ 00:22:41.872 17:33:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:41.872 * Looking for test storage... 00:22:41.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.872 17:33:20 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.872 17:33:20 -- nvmf/common.sh@7 -- # uname -s 00:22:42.131 17:33:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.131 17:33:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.131 17:33:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.131 17:33:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.131 17:33:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.131 17:33:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.131 17:33:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.131 17:33:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.131 17:33:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.131 17:33:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.131 17:33:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:42.131 17:33:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:42.131 17:33:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.131 17:33:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:22:42.131 17:33:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.131 17:33:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.131 17:33:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.131 17:33:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.131 17:33:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.131 17:33:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.131 17:33:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.131 17:33:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.131 17:33:20 -- paths/export.sh@5 -- # export PATH 00:22:42.131 17:33:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.131 17:33:20 -- nvmf/common.sh@46 -- # : 0 00:22:42.131 17:33:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:42.131 17:33:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:42.131 17:33:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:42.131 17:33:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.131 17:33:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.131 17:33:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:42.131 17:33:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:42.131 17:33:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:42.131 17:33:20 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.131 17:33:20 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.131 17:33:20 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:42.131 17:33:20 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:42.131 17:33:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:42.131 17:33:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.131 17:33:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:42.131 17:33:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:42.131 17:33:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:42.131 17:33:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.131 17:33:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.131 17:33:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.131 17:33:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:42.131 17:33:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:42.131 17:33:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:42.131 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.407 17:33:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:47.407 17:33:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:47.407 17:33:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:47.407 17:33:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:47.407 17:33:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:47.407 17:33:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:47.407 17:33:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:47.407 17:33:26 -- nvmf/common.sh@294 -- # net_devs=() 00:22:47.407 17:33:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:47.407 17:33:26 -- nvmf/common.sh@295 -- # e810=() 00:22:47.407 17:33:26 -- nvmf/common.sh@295 -- # local -ga e810 00:22:47.407 17:33:26 -- nvmf/common.sh@296 -- # x722=() 00:22:47.407 17:33:26 -- nvmf/common.sh@296 -- # local -ga x722 00:22:47.407 17:33:26 -- nvmf/common.sh@297 -- # mlx=() 00:22:47.407 17:33:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:47.407 
17:33:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.407 17:33:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:47.407 17:33:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:47.407 17:33:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:47.407 17:33:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:47.407 17:33:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:47.407 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:47.407 17:33:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
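The `gather_supported_nvmf_pci_devs` trace above buckets PCI functions into `e810`, `x722`, and `mlx` arrays keyed by `"$vendor:$device"` lookups in `pci_bus_cache`, then picks the e810 ports for this TCP phy run. A hedged bash reconstruction with `pci_bus_cache` stubbed in (the ConnectX-5 address is invented; the E810 addresses match the log):

```shell
# Stub of pci_bus_cache keyed "vendor:device"; values are space-separated
# PCI addresses, so the unquoted += expansions below word-split on purpose.
intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache=(
  ["$intel:0x159b"]="0000:af:00.0 0000:af:00.1"   # E810 (ice), as in the log
  ["$mellanox:0x1017"]="0000:b3:00.0"             # ConnectX-5, hypothetical
)
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # absent key expands to nothing
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
pci_devs=("${e810[@]}")   # e810 wins on this testbed, per the trace
```

This mirrors why the log reports `(( 2 == 0 ))` as false and then iterates two `Found 0000:af:00.x` devices.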
00:22:47.407 17:33:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:47.407 17:33:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:47.407 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:47.407 17:33:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:47.407 17:33:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:47.407 17:33:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.407 17:33:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:47.407 17:33:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.407 17:33:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:47.407 Found net devices under 0000:af:00.0: cvl_0_0 00:22:47.407 17:33:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.407 17:33:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:47.407 17:33:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.407 17:33:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:47.407 17:33:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.407 17:33:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:47.407 Found net devices under 0000:af:00.1: cvl_0_1 00:22:47.407 17:33:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.407 17:33:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:47.407 
17:33:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:47.407 17:33:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:47.407 17:33:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:47.407 17:33:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.407 17:33:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.407 17:33:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.407 17:33:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:47.407 17:33:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.407 17:33:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.407 17:33:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:47.407 17:33:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.407 17:33:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.407 17:33:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:47.407 17:33:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:47.407 17:33:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.407 17:33:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.666 17:33:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.666 17:33:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.666 17:33:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:47.666 17:33:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.666 17:33:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.666 17:33:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.666 17:33:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:47.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:47.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:22:47.666 00:22:47.666 --- 10.0.0.2 ping statistics --- 00:22:47.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.666 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:47.666 17:33:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:22:47.666 00:22:47.666 --- 10.0.0.1 ping statistics --- 00:22:47.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.666 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:47.666 17:33:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.666 17:33:26 -- nvmf/common.sh@410 -- # return 0 00:22:47.666 17:33:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:47.666 17:33:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.666 17:33:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:47.666 17:33:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:47.666 17:33:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.666 17:33:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:47.666 17:33:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:47.666 17:33:26 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:47.666 17:33:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:47.666 17:33:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:47.666 17:33:26 -- common/autotest_common.sh@10 -- # set +x 00:22:47.666 17:33:26 -- nvmf/common.sh@469 -- # nvmfpid=4180934 00:22:47.666 17:33:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:47.666 17:33:26 -- nvmf/common.sh@470 -- # waitforlisten 4180934 00:22:47.666 17:33:26 -- 
common/autotest_common.sh@819 -- # '[' -z 4180934 ']' 00:22:47.666 17:33:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.666 17:33:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:47.666 17:33:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.666 17:33:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:47.666 17:33:26 -- common/autotest_common.sh@10 -- # set +x 00:22:47.924 [2024-07-12 17:33:26.669046] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:47.924 [2024-07-12 17:33:26.669100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.924 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.924 [2024-07-12 17:33:26.756007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.924 [2024-07-12 17:33:26.799650] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:47.924 [2024-07-12 17:33:26.799804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.924 [2024-07-12 17:33:26.799817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.924 [2024-07-12 17:33:26.799827] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
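The startup above launches `nvmf_tgt` with `-m 0xF`, and the reactor log that follows confirms four reactors on cores 0-3. A small illustrative helper showing how such a core mask maps to core numbers (`mask_to_cores` is my name for it; SPDK's EAL does this internally):

```shell
# mask_to_cores is an illustrative helper, not part of SPDK: expand a core
# mask like the -m 0xF passed to nvmf_tgt into the cores it selects.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=()
  while [ "$mask" -ne 0 ]; do
    if (( mask & 1 )); then
      out+=("$core")
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${out[@]}"
}
```

So `-m 0xF` selects cores 0 1 2 3, matching the four "Reactor started on core N" lines in the trace.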
00:22:47.924 [2024-07-12 17:33:26.799874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.924 [2024-07-12 17:33:26.799891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.924 [2024-07-12 17:33:26.799989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.924 [2024-07-12 17:33:26.799991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.859 17:33:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:48.859 17:33:27 -- common/autotest_common.sh@852 -- # return 0 00:22:48.859 17:33:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.859 17:33:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.859 17:33:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 [2024-07-12 17:33:27.567000] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:48.859 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.859 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 Malloc1 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:48.859 17:33:27 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 [2024-07-12 17:33:27.626912] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.859 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 Malloc2 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.859 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 Malloc3 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:48.859 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.859 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.859 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.859 17:33:27 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.860 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 Malloc4 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.860 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 Malloc5 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.860 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:48.860 Malloc6 00:22:48.860 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.860 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:48.860 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.860 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.118 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 Malloc7 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.118 17:33:27 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 Malloc8 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.118 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 Malloc9 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.118 17:33:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:49.118 17:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:27 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 Malloc10 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:49.118 17:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:49.118 17:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:49.118 17:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.118 17:33:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:49.118 17:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 Malloc11 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:49.118 17:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:49.118 17:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:49.118 17:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.118 17:33:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.118 17:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.118 17:33:28 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:49.377 17:33:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.377 17:33:28 
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:50.753 17:33:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:50.753 17:33:29 -- common/autotest_common.sh@1177 -- # local i=0 00:22:50.753 17:33:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:50.753 17:33:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:50.753 17:33:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:52.657 17:33:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:52.657 17:33:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:52.657 17:33:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:52.657 17:33:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:52.657 17:33:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.657 17:33:31 -- common/autotest_common.sh@1187 -- # return 0 00:22:52.657 17:33:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.657 17:33:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:54.047 17:33:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:54.047 17:33:32 -- common/autotest_common.sh@1177 -- # local i=0 00:22:54.047 17:33:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:54.047 17:33:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:54.047 17:33:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:55.946 17:33:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:55.946 17:33:34 -- common/autotest_common.sh@1186 -- # lsblk 
-l -o NAME,SERIAL 00:22:55.946 17:33:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:55.946 17:33:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:55.946 17:33:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:55.946 17:33:34 -- common/autotest_common.sh@1187 -- # return 0 00:22:55.946 17:33:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.946 17:33:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:57.322 17:33:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:57.322 17:33:36 -- common/autotest_common.sh@1177 -- # local i=0 00:22:57.322 17:33:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:57.322 17:33:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:57.322 17:33:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:59.226 17:33:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:59.226 17:33:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:59.226 17:33:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:59.226 17:33:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:59.485 17:33:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:59.485 17:33:38 -- common/autotest_common.sh@1187 -- # return 0 00:22:59.485 17:33:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.485 17:33:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:00.859 17:33:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:23:00.859 17:33:39 -- common/autotest_common.sh@1177 -- # local i=0 00:23:00.859 17:33:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:00.859 17:33:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:00.859 17:33:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:02.756 17:33:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:02.756 17:33:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:02.756 17:33:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:23:02.756 17:33:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:02.756 17:33:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:02.756 17:33:41 -- common/autotest_common.sh@1187 -- # return 0 00:23:02.756 17:33:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.756 17:33:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:04.129 17:33:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:04.129 17:33:42 -- common/autotest_common.sh@1177 -- # local i=0 00:23:04.129 17:33:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:04.129 17:33:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:04.129 17:33:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:06.030 17:33:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:06.287 17:33:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:06.287 17:33:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:23:06.287 17:33:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:06.287 17:33:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:06.287 17:33:45 -- 
common/autotest_common.sh@1187 -- # return 0 00:23:06.287 17:33:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.287 17:33:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:07.663 17:33:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:07.663 17:33:46 -- common/autotest_common.sh@1177 -- # local i=0 00:23:07.663 17:33:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:07.663 17:33:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:07.663 17:33:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:09.567 17:33:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:09.567 17:33:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:09.567 17:33:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:23:09.567 17:33:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:09.567 17:33:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:09.567 17:33:48 -- common/autotest_common.sh@1187 -- # return 0 00:23:09.567 17:33:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.567 17:33:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:10.945 17:33:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:10.945 17:33:49 -- common/autotest_common.sh@1177 -- # local i=0 00:23:10.945 17:33:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:10.945 17:33:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:10.945 17:33:49 -- common/autotest_common.sh@1184 
-- # sleep 2 00:23:13.478 17:33:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:13.478 17:33:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:13.478 17:33:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:23:13.478 17:33:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:13.478 17:33:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:13.478 17:33:51 -- common/autotest_common.sh@1187 -- # return 0 00:23:13.478 17:33:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.478 17:33:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:14.857 17:33:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:14.857 17:33:53 -- common/autotest_common.sh@1177 -- # local i=0 00:23:14.857 17:33:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:14.857 17:33:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:14.857 17:33:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:16.762 17:33:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:16.762 17:33:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:16.762 17:33:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:23:16.762 17:33:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:16.762 17:33:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:16.762 17:33:55 -- common/autotest_common.sh@1187 -- # return 0 00:23:16.762 17:33:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.762 17:33:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:18.198 17:33:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:18.198 17:33:56 -- common/autotest_common.sh@1177 -- # local i=0 00:23:18.198 17:33:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:18.198 17:33:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:18.198 17:33:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:20.101 17:33:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:20.101 17:33:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:20.101 17:33:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:23:20.101 17:33:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:20.101 17:33:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:20.101 17:33:58 -- common/autotest_common.sh@1187 -- # return 0 00:23:20.101 17:33:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.101 17:33:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:21.999 17:34:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:21.999 17:34:00 -- common/autotest_common.sh@1177 -- # local i=0 00:23:21.999 17:34:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.999 17:34:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:21.999 17:34:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:23.901 17:34:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:23.901 17:34:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:23.901 17:34:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:23:23.901 17:34:02 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:23.901 17:34:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:23.901 17:34:02 -- common/autotest_common.sh@1187 -- # return 0 00:23:23.901 17:34:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.901 17:34:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:25.275 17:34:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:25.275 17:34:04 -- common/autotest_common.sh@1177 -- # local i=0 00:23:25.275 17:34:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:25.275 17:34:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:25.275 17:34:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:27.814 17:34:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:27.814 17:34:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:27.814 17:34:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:23:27.814 17:34:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:27.814 17:34:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:27.814 17:34:06 -- common/autotest_common.sh@1187 -- # return 0 00:23:27.814 17:34:06 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:27.814 [global] 00:23:27.814 thread=1 00:23:27.814 invalidate=1 00:23:27.814 rw=read 00:23:27.814 time_based=1 00:23:27.814 runtime=10 00:23:27.814 ioengine=libaio 00:23:27.814 direct=1 00:23:27.814 bs=262144 00:23:27.814 iodepth=64 00:23:27.814 norandommap=1 00:23:27.814 numjobs=1 00:23:27.814 00:23:27.814 [job0] 00:23:27.814 filename=/dev/nvme0n1 00:23:27.814 [job1] 
00:23:27.814 filename=/dev/nvme10n1 00:23:27.814 [job2] 00:23:27.814 filename=/dev/nvme1n1 00:23:27.814 [job3] 00:23:27.814 filename=/dev/nvme2n1 00:23:27.814 [job4] 00:23:27.814 filename=/dev/nvme3n1 00:23:27.814 [job5] 00:23:27.814 filename=/dev/nvme4n1 00:23:27.814 [job6] 00:23:27.814 filename=/dev/nvme5n1 00:23:27.814 [job7] 00:23:27.814 filename=/dev/nvme6n1 00:23:27.814 [job8] 00:23:27.814 filename=/dev/nvme7n1 00:23:27.814 [job9] 00:23:27.814 filename=/dev/nvme8n1 00:23:27.814 [job10] 00:23:27.814 filename=/dev/nvme9n1 00:23:27.814 Could not set queue depth (nvme0n1) 00:23:27.814 Could not set queue depth (nvme10n1) 00:23:27.814 Could not set queue depth (nvme1n1) 00:23:27.814 Could not set queue depth (nvme2n1) 00:23:27.814 Could not set queue depth (nvme3n1) 00:23:27.814 Could not set queue depth (nvme4n1) 00:23:27.814 Could not set queue depth (nvme5n1) 00:23:27.814 Could not set queue depth (nvme6n1) 00:23:27.814 Could not set queue depth (nvme7n1) 00:23:27.814 Could not set queue depth (nvme8n1) 00:23:27.814 Could not set queue depth (nvme9n1) 00:23:27.814 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:27.814 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:27.814 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:27.814 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:27.814 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:27.814 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:27.814 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:27.814 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.814 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.814 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.814 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.814 fio-3.35
00:23:27.814 Starting 11 threads
00:23:40.016
00:23:40.016 job0: (groupid=0, jobs=1): err= 0: pid=4188601: Fri Jul 12 17:34:17 2024
00:23:40.016 read: IOPS=619, BW=155MiB/s (162MB/s)(1552MiB/10025msec)
00:23:40.016 slat (usec): min=13, max=85685, avg=1253.73, stdev=4780.24
00:23:40.016 clat (msec): min=3, max=297, avg=102.05, stdev=52.36
00:23:40.016 lat (msec): min=3, max=297, avg=103.30, stdev=53.26
00:23:40.016 clat percentiles (msec):
00:23:40.016 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 51], 20.00th=[ 61],
00:23:40.016 | 30.00th=[ 67], 40.00th=[ 79], 50.00th=[ 91], 60.00th=[ 101],
00:23:40.016 | 70.00th=[ 120], 80.00th=[ 148], 90.00th=[ 190], 95.00th=[ 207],
00:23:40.016 | 99.00th=[ 222], 99.50th=[ 232], 99.90th=[ 251], 99.95th=[ 264],
00:23:40.016 | 99.99th=[ 296]
00:23:40.016 bw ( KiB/s): min=68608, max=277504, per=7.68%, avg=157260.80, stdev=63962.43, samples=20
00:23:40.016 iops : min= 268, max= 1084, avg=614.30, stdev=249.85, samples=20
00:23:40.016 lat (msec) : 4=0.06%, 10=1.00%, 20=1.68%, 50=7.25%, 100=50.39%
00:23:40.016 lat (msec) : 250=39.49%, 500=0.13%
00:23:40.016 cpu : usr=0.23%, sys=2.31%, ctx=1289, majf=0, minf=4097
00:23:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:23:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.016 issued rwts: total=6206,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.016 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.016 job1: (groupid=0, jobs=1): err= 0: pid=4188603: Fri Jul 12 17:34:17 2024
00:23:40.016 read: IOPS=701, BW=175MiB/s (184MB/s)(1772MiB/10095msec)
00:23:40.016 slat (usec): min=12, max=149704, avg=1003.23, stdev=4173.89
00:23:40.016 clat (msec): min=5, max=246, avg=90.09, stdev=46.73
00:23:40.016 lat (msec): min=5, max=348, avg=91.09, stdev=47.33
00:23:40.016 clat percentiles (msec):
00:23:40.016 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 45], 20.00th=[ 58],
00:23:40.016 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 88],
00:23:40.016 | 70.00th=[ 97], 80.00th=[ 121], 90.00th=[ 169], 95.00th=[ 194],
00:23:40.016 | 99.00th=[ 218], 99.50th=[ 224], 99.90th=[ 241], 99.95th=[ 241],
00:23:40.016 | 99.99th=[ 247]
00:23:40.016 bw ( KiB/s): min=96256, max=276992, per=8.78%, avg=179788.80, stdev=55596.66, samples=20
00:23:40.016 iops : min= 376, max= 1082, avg=702.30, stdev=217.17, samples=20
00:23:40.016 lat (msec) : 10=0.51%, 20=2.13%, 50=10.87%, 100=58.66%, 250=27.83%
00:23:40.016 cpu : usr=0.27%, sys=2.25%, ctx=1538, majf=0, minf=4097
00:23:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:23:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.016 issued rwts: total=7086,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.016 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.016 job2: (groupid=0, jobs=1): err= 0: pid=4188604: Fri Jul 12 17:34:17 2024
00:23:40.016 read: IOPS=658, BW=165MiB/s (173MB/s)(1651MiB/10027msec)
00:23:40.016 slat (usec): min=12, max=167496, avg=1074.88, stdev=4986.85
00:23:40.016 clat (msec): min=5, max=243, avg=96.01, stdev=44.33
00:23:40.016 lat (msec): min=5, max=377, avg=97.08, stdev=45.07
00:23:40.016 clat percentiles (msec):
00:23:40.016 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 61],
00:23:40.016 | 30.00th=[ 68], 40.00th=[ 77], 50.00th=[ 88], 60.00th=[ 100],
00:23:40.016 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 161], 95.00th=[ 178],
00:23:40.016 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 228], 99.95th=[ 234],
00:23:40.016 | 99.99th=[ 245]
00:23:40.016 bw ( KiB/s): min=97280, max=260096, per=8.18%, avg=167449.60, stdev=40951.74, samples=20
00:23:40.016 iops : min= 380, max= 1016, avg=654.10, stdev=159.97, samples=20
00:23:40.016 lat (msec) : 10=0.39%, 20=1.73%, 50=9.43%, 100=48.94%, 250=39.51%
00:23:40.016 cpu : usr=0.24%, sys=2.26%, ctx=1352, majf=0, minf=4097
00:23:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:23:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.016 issued rwts: total=6604,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.016 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.016 job3: (groupid=0, jobs=1): err= 0: pid=4188605: Fri Jul 12 17:34:17 2024
00:23:40.016 read: IOPS=888, BW=222MiB/s (233MB/s)(2242MiB/10091msec)
00:23:40.016 slat (usec): min=11, max=187792, avg=896.80, stdev=3531.63
00:23:40.016 clat (usec): min=1877, max=295188, avg=71034.23, stdev=37730.01
00:23:40.016 lat (usec): min=1930, max=384156, avg=71931.04, stdev=38074.55
00:23:40.016 clat percentiles (msec):
00:23:40.016 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 31], 20.00th=[ 43],
00:23:40.016 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 71],
00:23:40.016 | 70.00th=[ 80], 80.00th=[ 99], 90.00th=[ 122], 95.00th=[ 138],
00:23:40.016 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 284], 99.95th=[ 284],
00:23:40.016 | 99.99th=[ 296]
00:23:40.016 bw ( KiB/s): min=70656, max=402432, per=11.14%, avg=227993.60, stdev=84053.91, samples=20
00:23:40.016 iops : min= 276, max= 1572, avg=890.60, stdev=328.34, samples=20
00:23:40.016 lat (msec) : 2=0.01%, 4=0.09%, 10=0.80%, 20=3.12%, 50=24.74%
00:23:40.016 lat (msec) : 100=52.17%, 250=18.78%, 500=0.29%
00:23:40.016 cpu : usr=0.34%, sys=3.21%, ctx=1602, majf=0, minf=3221
00:23:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:23:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.016 issued rwts: total=8969,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.016 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.016 job4: (groupid=0, jobs=1): err= 0: pid=4188606: Fri Jul 12 17:34:17 2024
00:23:40.016 read: IOPS=935, BW=234MiB/s (245MB/s)(2360MiB/10088msec)
00:23:40.016 slat (usec): min=10, max=123827, avg=837.18, stdev=3448.79
00:23:40.016 clat (usec): min=894, max=301739, avg=67505.42, stdev=46964.90
00:23:40.016 lat (usec): min=924, max=301789, avg=68342.60, stdev=47534.21
00:23:40.016 clat percentiles (msec):
00:23:40.016 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 27],
00:23:40.016 | 30.00th=[ 32], 40.00th=[ 44], 50.00th=[ 54], 60.00th=[ 65],
00:23:40.016 | 70.00th=[ 84], 80.00th=[ 106], 90.00th=[ 132], 95.00th=[ 165],
00:23:40.016 | 99.00th=[ 220], 99.50th=[ 226], 99.90th=[ 249], 99.95th=[ 257],
00:23:40.016 | 99.99th=[ 300]
00:23:40.016 bw ( KiB/s): min=88240, max=584704, per=11.73%, avg=240034.40, stdev=137043.11, samples=20
00:23:40.016 iops : min= 344, max= 2284, avg=937.60, stdev=535.36, samples=20
00:23:40.016 lat (usec) : 1000=0.01%
00:23:40.016 lat (msec) : 2=0.03%, 4=0.04%, 10=0.89%, 20=2.42%, 50=43.31%
00:23:40.016 lat (msec) : 100=31.58%, 250=21.62%, 500=0.10%
00:23:40.016 cpu : usr=0.28%, sys=3.20%, ctx=1848, majf=0, minf=4097
00:23:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:23:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.016 issued rwts: total=9439,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.016 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.016 job5: (groupid=0, jobs=1): err= 0: pid=4188609: Fri Jul 12 17:34:17 2024
00:23:40.016 read: IOPS=646, BW=162MiB/s (170MB/s)(1631MiB/10091msec)
00:23:40.016 slat (usec): min=11, max=98681, avg=1134.92, stdev=5290.69
00:23:40.016 clat (usec): min=876, max=293215, avg=97749.02, stdev=53748.89
00:23:40.016 lat (usec): min=913, max=314295, avg=98883.94, stdev=54703.93
00:23:40.016 clat percentiles (msec):
00:23:40.016 | 1.00th=[ 4], 5.00th=[ 19], 10.00th=[ 40], 20.00th=[ 61],
00:23:40.016 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 99],
00:23:40.016 | 70.00th=[ 118], 80.00th=[ 150], 90.00th=[ 184], 95.00th=[ 201],
00:23:40.016 | 99.00th=[ 220], 99.50th=[ 228], 99.90th=[ 255], 99.95th=[ 275],
00:23:40.016 | 99.99th=[ 292]
00:23:40.016 bw ( KiB/s): min=76288, max=268288, per=8.08%, avg=165401.60, stdev=59186.63, samples=20
00:23:40.016 iops : min= 298, max= 1048, avg=646.10, stdev=231.20, samples=20
00:23:40.016 lat (usec) : 1000=0.02%
00:23:40.016 lat (msec) : 2=0.12%, 4=0.89%, 10=0.67%, 20=3.88%, 50=7.26%
00:23:40.016 lat (msec) : 100=47.82%, 250=39.23%, 500=0.11%
00:23:40.016 cpu : usr=0.23%, sys=2.48%, ctx=1436, majf=0, minf=4097
00:23:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:23:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.016 issued rwts: total=6525,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.016 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.016 job6: (groupid=0, jobs=1): err= 0: pid=4188610: Fri Jul 12 17:34:17 2024
00:23:40.016 read: IOPS=539, BW=135MiB/s (141MB/s)(1361MiB/10092msec)
00:23:40.016 slat (usec): min=11, max=167090, avg=1595.30, stdev=6215.74
00:23:40.016 clat (usec): min=1006, max=373915, avg=116922.66, stdev=52722.32
00:23:40.016 lat (usec): min=1034, max=373947, avg=118517.96, stdev=53676.42
00:23:40.016 clat percentiles (msec):
00:23:40.016 | 1.00th=[ 8], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 69],
00:23:40.016 | 30.00th=[ 90], 40.00th=[ 107], 50.00th=[ 118], 60.00th=[ 127],
00:23:40.016 | 70.00th=[ 138], 80.00th=[ 161], 90.00th=[ 197], 95.00th=[ 209],
00:23:40.016 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 326], 99.95th=[ 338],
00:23:40.016 | 99.99th=[ 376]
00:23:40.016 bw ( KiB/s): min=63488, max=225792, per=6.73%, avg=137728.00, stdev=48928.62, samples=20
00:23:40.016 iops : min= 248, max= 882, avg=538.00, stdev=191.13, samples=20
00:23:40.017 lat (msec) : 2=0.26%, 4=0.13%, 10=1.21%, 20=0.61%, 50=9.11%
00:23:40.017 lat (msec) : 100=25.28%, 250=63.21%, 500=0.20%
00:23:40.017 cpu : usr=0.26%, sys=1.92%, ctx=1179, majf=0, minf=4097
00:23:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:23:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.017 issued rwts: total=5444,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.017 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.017 job7: (groupid=0, jobs=1): err= 0: pid=4188612: Fri Jul 12 17:34:17 2024
00:23:40.017 read: IOPS=838, BW=210MiB/s (220MB/s)(2116MiB/10092msec)
00:23:40.017 slat (usec): min=10, max=143662, avg=977.88, stdev=4518.20
00:23:40.017 clat (msec): min=2, max=340, avg=75.27, stdev=43.17
00:23:40.017 lat (msec): min=2, max=359, avg=76.25, stdev=43.86
00:23:40.017 clat percentiles (msec):
00:23:40.017 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 42],
00:23:40.017 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71],
00:23:40.017 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 138], 95.00th=[ 178],
00:23:40.017 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 232], 99.95th=[ 247],
00:23:40.017 | 99.99th=[ 342]
00:23:40.017 bw ( KiB/s): min=63488, max=441856, per=10.51%, avg=215070.35, stdev=89554.27, samples=20
00:23:40.017 iops : min= 248, max= 1726, avg=840.10, stdev=349.80, samples=20
00:23:40.017 lat (msec) : 4=0.09%, 10=0.82%, 20=1.22%, 50=24.23%, 100=55.26%
00:23:40.017 lat (msec) : 250=18.34%, 500=0.04%
00:23:40.017 cpu : usr=0.43%, sys=2.94%, ctx=1596, majf=0, minf=4097
00:23:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:23:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.017 issued rwts: total=8463,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.017 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.017 job8: (groupid=0, jobs=1): err= 0: pid=4188613: Fri Jul 12 17:34:17 2024
00:23:40.017 read: IOPS=748, BW=187MiB/s (196MB/s)(1876MiB/10023msec)
00:23:40.017 slat (usec): min=12, max=117353, avg=943.53, stdev=4929.65
00:23:40.017 clat (usec): min=906, max=309873, avg=84456.95, stdev=54711.83
00:23:40.017 lat (usec): min=935, max=321503, avg=85400.48, stdev=55595.52
00:23:40.017 clat percentiles (msec):
00:23:40.017 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 41],
00:23:40.017 | 30.00th=[ 50], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 85],
00:23:40.017 | 70.00th=[ 99], 80.00th=[ 128], 90.00th=[ 174], 95.00th=[ 203],
00:23:40.017 | 99.00th=[ 224], 99.50th=[ 230], 99.90th=[ 284], 99.95th=[ 288],
00:23:40.017 | 99.99th=[ 309]
00:23:40.017 bw ( KiB/s): min=73728, max=426496, per=9.31%, avg=190524.75, stdev=90050.72, samples=20
00:23:40.017 iops : min= 288, max= 1666, avg=744.20, stdev=351.80, samples=20
00:23:40.017 lat (usec) : 1000=0.04%
00:23:40.017 lat (msec) : 2=0.20%, 4=0.32%, 10=2.52%, 20=5.42%, 50=21.67%
00:23:40.017 lat (msec) : 100=41.17%, 250=28.45%, 500=0.21%
00:23:40.017 cpu : usr=0.20%, sys=2.60%, ctx=1660, majf=0, minf=4097
00:23:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:23:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.017 issued rwts: total=7505,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.017 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.017 job9: (groupid=0, jobs=1): err= 0: pid=4188614: Fri Jul 12 17:34:17 2024
00:23:40.017 read: IOPS=878, BW=220MiB/s (230MB/s)(2215MiB/10091msec)
00:23:40.017 slat (usec): min=11, max=115773, avg=836.62, stdev=3705.15
00:23:40.017 clat (usec): min=370, max=305801, avg=71973.99, stdev=48147.51
00:23:40.017 lat (usec): min=1141, max=351627, avg=72810.61, stdev=48658.23
00:23:40.017 clat percentiles (msec):
00:23:40.017 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 35],
00:23:40.017 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 58], 60.00th=[ 71],
00:23:40.017 | 70.00th=[ 91], 80.00th=[ 110], 90.00th=[ 140], 95.00th=[ 178],
00:23:40.017 | 99.00th=[ 211], 99.50th=[ 224], 99.90th=[ 236], 99.95th=[ 247],
00:23:40.017 | 99.99th=[ 305]
00:23:40.017 bw ( KiB/s): min=89600, max=472064, per=11.00%, avg=225228.80, stdev=107717.16, samples=20
00:23:40.017 iops : min= 350, max= 1844, avg=879.80, stdev=420.77, samples=20
00:23:40.017 lat (usec) : 500=0.01%
00:23:40.017 lat (msec) : 2=0.08%, 4=0.24%, 10=3.67%, 20=3.78%, 50=35.56%
00:23:40.017 lat (msec) : 100=31.35%, 250=25.29%, 500=0.02%
00:23:40.017 cpu : usr=0.34%, sys=3.15%, ctx=1636, majf=0, minf=4097
00:23:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:23:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.017 issued rwts: total=8861,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.017 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.017 job10: (groupid=0, jobs=1): err= 0: pid=4188615: Fri Jul 12 17:34:17 2024
00:23:40.017 read: IOPS=556, BW=139MiB/s (146MB/s)(1405MiB/10095msec)
00:23:40.017 slat (usec): min=14, max=149394, avg=1653.39, stdev=6539.50
00:23:40.017 clat (msec): min=3, max=346, avg=113.21, stdev=52.18
00:23:40.017 lat (msec): min=3, max=346, avg=114.87, stdev=53.11
00:23:40.017 clat percentiles (msec):
00:23:40.017 | 1.00th=[ 21], 5.00th=[ 40], 10.00th=[ 51], 20.00th=[ 66],
00:23:40.017 | 30.00th=[ 78], 40.00th=[ 92], 50.00th=[ 109], 60.00th=[ 126],
00:23:40.017 | 70.00th=[ 138], 80.00th=[ 161], 90.00th=[ 190], 95.00th=[ 209],
00:23:40.017 | 99.00th=[ 234], 99.50th=[ 243], 99.90th=[ 275], 99.95th=[ 321],
00:23:40.017 | 99.99th=[ 347]
00:23:40.017 bw ( KiB/s): min=64000, max=237568, per=6.95%, avg=142208.00, stdev=46466.98, samples=20
00:23:40.017 iops : min= 250, max= 928, avg=555.50, stdev=181.51, samples=20
00:23:40.017 lat (msec) : 4=0.07%, 10=0.09%, 20=0.69%, 50=8.81%, 100=36.25%
00:23:40.017 lat (msec) : 250=53.92%, 500=0.16%
00:23:40.017 cpu : usr=0.23%, sys=2.26%, ctx=1104, majf=0, minf=4097
00:23:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:23:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:40.017 issued rwts: total=5619,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:40.017 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:40.017
00:23:40.017 Run status group 0 (all jobs):
00:23:40.017 READ: bw=1999MiB/s (2096MB/s), 135MiB/s-234MiB/s (141MB/s-245MB/s), io=19.7GiB (21.2GB), run=10023-10095msec
00:23:40.017
00:23:40.017 Disk stats (read/write):
00:23:40.017 nvme0n1: ios=11832/0, merge=0/0, ticks=1217808/0, in_queue=1217808, util=96.06%
00:23:40.017 nvme10n1: ios=14102/0, merge=0/0, ticks=1252684/0, in_queue=1252684, util=96.46%
00:23:40.017 nvme1n1: ios=12793/0, merge=0/0, ticks=1226969/0, in_queue=1226969, util=96.80%
00:23:40.017 nvme2n1: ios=17869/0, merge=0/0, ticks=1253952/0, in_queue=1253952, util=97.12%
00:23:40.017 nvme3n1: ios=18837/0, merge=0/0, ticks=1252894/0, in_queue=1252894, util=97.24%
00:23:40.017 nvme4n1: ios=12967/0, merge=0/0, ticks=1252632/0, in_queue=1252632, util=97.77%
00:23:40.017 nvme5n1: ios=10839/0, merge=0/0, ticks=1249242/0, in_queue=1249242, util=98.01%
00:23:40.017 nvme6n1: ios=16876/0, merge=0/0, ticks=1248452/0, in_queue=1248452, util=98.19%
00:23:40.017 nvme7n1: ios=14490/0, merge=0/0, ticks=1225918/0, in_queue=1225918, util=98.76%
00:23:40.017 nvme8n1: ios=17655/0, merge=0/0, ticks=1252588/0, in_queue=1252588, util=99.08%
00:23:40.017 nvme9n1: ios=11170/0, merge=0/0, ticks=1245194/0, in_queue=1245194, util=99.31%
00:23:40.017 17:34:17 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:23:40.017 [global]
00:23:40.017 thread=1
00:23:40.017 invalidate=1
00:23:40.017 rw=randwrite
00:23:40.017 time_based=1
00:23:40.017 runtime=10
00:23:40.017 ioengine=libaio
00:23:40.017 direct=1
00:23:40.017 bs=262144
00:23:40.017 iodepth=64
00:23:40.017 norandommap=1
00:23:40.017 numjobs=1
00:23:40.017
00:23:40.017 [job0]
00:23:40.017 filename=/dev/nvme0n1
00:23:40.017 [job1]
00:23:40.017 filename=/dev/nvme10n1
00:23:40.017 [job2]
00:23:40.017 filename=/dev/nvme1n1
00:23:40.017 [job3]
00:23:40.017 filename=/dev/nvme2n1
00:23:40.017 [job4]
00:23:40.017 filename=/dev/nvme3n1
00:23:40.017 [job5]
00:23:40.017 filename=/dev/nvme4n1
00:23:40.017 [job6]
00:23:40.017 filename=/dev/nvme5n1
00:23:40.017 [job7]
00:23:40.017 filename=/dev/nvme6n1
00:23:40.017 [job8]
00:23:40.017 filename=/dev/nvme7n1
00:23:40.017 [job9]
00:23:40.017 filename=/dev/nvme8n1
00:23:40.017 [job10]
00:23:40.017 filename=/dev/nvme9n1
00:23:40.017 Could not set queue depth (nvme0n1)
00:23:40.017 Could not set queue depth (nvme10n1)
00:23:40.017 Could not set queue depth (nvme1n1)
00:23:40.017 Could not set queue depth (nvme2n1)
00:23:40.017 Could not set queue depth (nvme3n1)
00:23:40.017 Could not set queue depth (nvme4n1)
00:23:40.017 Could not set queue depth (nvme5n1)
00:23:40.017 Could not set queue depth (nvme6n1)
00:23:40.017 Could not set queue depth (nvme7n1)
00:23:40.017 Could not set queue depth (nvme8n1)
00:23:40.017 Could not set queue depth (nvme9n1)
00:23:40.017 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:40.017 fio-3.35
00:23:40.017 Starting 11 threads
00:23:49.993
00:23:49.993 job0: (groupid=0, jobs=1): err= 0: pid=4190343: Fri Jul 12 17:34:28 2024
00:23:49.993 write: IOPS=448, BW=112MiB/s (117MB/s)(1130MiB/10091msec); 0 zone resets
00:23:49.993 slat (usec): min=25, max=67689, avg=2205.73, stdev=4604.16
00:23:49.993 clat (msec): min=49, max=314, avg=140.55, stdev=58.96
00:23:49.993 lat (msec): min=50, max=314, avg=142.76, stdev=59.68
00:23:49.993 clat percentiles (msec):
00:23:49.993 | 1.00th=[ 51], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 92],
00:23:49.993 | 30.00th=[ 101], 40.00th=[ 128], 50.00th=[ 136], 60.00th=[ 142],
00:23:49.993 | 70.00th=[ 165], 80.00th=[ 192], 90.00th=[ 224], 95.00th=[ 245],
00:23:49.993 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 313], 99.95th=[ 313],
00:23:49.993 | 99.99th=[ 313]
00:23:49.993 bw ( KiB/s): min=53248, max=256512, per=7.74%, avg=114124.80, stdev=50172.40, samples=20
00:23:49.993 iops : min= 208, max= 1002, avg=445.80, stdev=195.99, samples=20
00:23:49.993 lat (msec) : 50=0.15%, 100=29.86%, 250=65.69%, 500=4.29%
00:23:49.993 cpu : usr=1.30%, sys=1.30%, ctx=1180, majf=0, minf=1
00:23:49.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:23:49.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.993 issued rwts: total=0,4521,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.993 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.993 job1: (groupid=0, jobs=1): err= 0: pid=4190355: Fri Jul 12 17:34:28 2024
00:23:49.993 write: IOPS=466, BW=117MiB/s (122MB/s)(1185MiB/10147msec); 0 zone resets
00:23:49.993 slat (usec): min=27, max=73509, avg=1956.13, stdev=4540.52
00:23:49.993 clat (usec): min=1406, max=317639, avg=135007.91, stdev=66568.59
00:23:49.993 lat (usec): min=1447, max=332321, avg=136964.04, stdev=67436.26
00:23:49.993 clat percentiles (msec):
00:23:49.993 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 87],
00:23:49.993 | 30.00th=[ 105], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 150],
00:23:49.993 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 226], 95.00th=[ 253],
00:23:49.993 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 317], 99.95th=[ 317],
00:23:49.993 | 99.99th=[ 317]
00:23:49.993 bw ( KiB/s): min=59392, max=218112, per=8.12%, avg=119654.40, stdev=42585.38, samples=20
00:23:49.993 iops : min= 232, max= 852, avg=467.40, stdev=166.35, samples=20
00:23:49.993 lat (msec) : 2=0.08%, 4=0.34%, 10=1.82%, 20=3.63%, 50=6.25%
00:23:49.993 lat (msec) : 100=17.10%, 250=65.28%, 500=5.51%
00:23:49.993 cpu : usr=1.19%, sys=1.03%, ctx=1806, majf=0, minf=1
00:23:49.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:23:49.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.993 issued rwts: total=0,4738,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.993 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.993 job2: (groupid=0, jobs=1): err= 0: pid=4190357: Fri Jul 12 17:34:28 2024
00:23:49.993 write: IOPS=507, BW=127MiB/s (133MB/s)(1287MiB/10148msec); 0 zone resets
00:23:49.993 slat (usec): min=27, max=74712, avg=1727.23, stdev=3580.06
00:23:49.993 clat (msec): min=2, max=313, avg=124.35, stdev=40.71
00:23:49.993 lat (msec): min=2, max=313, avg=126.08, stdev=41.23
00:23:49.993 clat percentiles (msec):
00:23:49.993 | 1.00th=[ 15], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 93],
00:23:49.993 | 30.00th=[ 118], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 136],
00:23:49.993 | 70.00th=[ 144], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 174],
00:23:49.993 | 99.00th=[ 213], 99.50th=[ 251], 99.90th=[ 305], 99.95th=[ 305],
00:23:49.993 | 99.99th=[ 313]
00:23:49.993 bw ( KiB/s): min=94208, max=247808, per=8.83%, avg=130176.00, stdev=32358.23, samples=20
00:23:49.993 iops : min= 368, max= 968, avg=508.50, stdev=126.40, samples=20
00:23:49.993 lat (msec) : 4=0.04%, 10=0.51%, 20=1.34%, 50=2.33%, 100=18.24%
00:23:49.993 lat (msec) : 250=77.12%, 500=0.43%
00:23:49.993 cpu : usr=1.30%, sys=1.52%, ctx=1945, majf=0, minf=1
00:23:49.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:23:49.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.993 issued rwts: total=0,5148,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.994 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.994 job3: (groupid=0, jobs=1): err= 0: pid=4190358: Fri Jul 12 17:34:28 2024
00:23:49.994 write: IOPS=546, BW=137MiB/s (143MB/s)(1377MiB/10076msec); 0 zone resets
00:23:49.994 slat (usec): min=19, max=14382, avg=1595.95, stdev=3243.70
00:23:49.994 clat (msec): min=4, max=187, avg=115.47, stdev=37.70
00:23:49.994 lat (msec): min=4, max=187, avg=117.07, stdev=38.27
00:23:49.994 clat percentiles (msec):
00:23:49.994 | 1.00th=[ 18], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 74],
00:23:49.994 | 30.00th=[ 101], 40.00th=[ 124], 50.00th=[ 130], 60.00th=[ 133],
00:23:49.994 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 165],
00:23:49.994 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 188],
00:23:49.994 | 99.99th=[ 188]
00:23:49.994 bw ( KiB/s): min=100352, max=240128, per=9.46%, avg=139366.40, stdev=37482.57, samples=20
00:23:49.994 iops : min= 392, max= 938, avg=544.40, stdev=146.42, samples=20
00:23:49.994 lat (msec) : 10=0.29%, 20=1.00%, 50=4.25%, 100=24.79%, 250=69.67%
00:23:49.994 cpu : usr=1.14%, sys=1.63%, ctx=2130, majf=0, minf=1
00:23:49.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:23:49.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.994 issued rwts: total=0,5507,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.994 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.994 job4: (groupid=0, jobs=1): err= 0: pid=4190359: Fri Jul 12 17:34:28 2024
00:23:49.994 write: IOPS=474, BW=119MiB/s (124MB/s)(1195MiB/10083msec); 0 zone resets
00:23:49.994 slat (usec): min=23, max=247873, avg=1801.79, stdev=7752.49
00:23:49.994 clat (usec): min=1526, max=458202, avg=133089.47, stdev=70159.78
00:23:49.994 lat (usec): min=1609, max=458251, avg=134891.26, stdev=71057.46
00:23:49.994 clat percentiles (msec):
00:23:49.994 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 38], 20.00th=[ 68],
00:23:49.994 | 30.00th=[ 100], 40.00th=[ 124], 50.00th=[ 132], 60.00th=[ 144],
00:23:49.994 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 224], 95.00th=[ 271],
00:23:49.994 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 418], 99.95th=[ 418],
00:23:49.994 | 99.99th=[ 460]
00:23:49.994 bw ( KiB/s): min=54380, max=284672, per=8.20%, avg=120786.20, stdev=51668.76, samples=20
00:23:49.994 iops : min= 212, max= 1112, avg=471.80, stdev=201.86, samples=20
00:23:49.994 lat (msec) : 2=0.04%, 4=0.17%, 10=1.17%, 20=1.88%, 50=11.06%
00:23:49.994 lat (msec) : 100=15.96%, 250=62.18%, 500=7.53%
00:23:49.994 cpu : usr=0.98%, sys=1.49%, ctx=2054, majf=0, minf=1
00:23:49.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:23:49.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.994 issued rwts: total=0,4781,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.994 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.994 job5: (groupid=0, jobs=1): err= 0: pid=4190360: Fri Jul 12 17:34:28 2024
00:23:49.994 write: IOPS=531, BW=133MiB/s (139MB/s)(1348MiB/10147msec); 0 zone resets
00:23:49.994 slat (usec): min=24, max=32661, avg=1733.76, stdev=3330.10
00:23:49.994 clat (usec): min=1738, max=309311, avg=118300.62, stdev=39115.69
00:23:49.994 lat (msec): min=2, max=309, avg=120.03, stdev=39.62
00:23:49.994 clat percentiles (msec):
00:23:49.994 | 1.00th=[ 13], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 91],
00:23:49.994 | 30.00th=[ 120], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 131],
00:23:49.994 | 70.00th=[ 134], 80.00th=[ 146], 90.00th=[ 157], 95.00th=[ 167],
00:23:49.994 | 99.00th=[ 180], 99.50th=[ 236], 99.90th=[ 300], 99.95th=[ 300],
00:23:49.994 | 99.99th=[ 309]
00:23:49.994 bw ( KiB/s): min=106496, max=305152, per=9.25%, avg=136396.80, stdev=44059.38, samples=20
00:23:49.994 iops : min= 416, max= 1192, avg=532.80, stdev=172.11, samples=20
00:23:49.994 lat (msec) : 2=0.02%, 4=0.06%, 10=0.72%, 20=0.91%, 50=4.40%
00:23:49.994 lat (msec) : 100=15.76%, 250=77.73%, 500=0.41%
00:23:49.994 cpu : usr=1.53%, sys=1.64%, ctx=1814, majf=0, minf=1
00:23:49.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:23:49.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.994 issued rwts: total=0,5392,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.994 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.994 job6: (groupid=0, jobs=1): err= 0: pid=4190361: Fri Jul 12 17:34:28 2024
00:23:49.995 write: IOPS=654, BW=164MiB/s (172MB/s)(1651MiB/10086msec); 0 zone resets
00:23:49.995 slat (usec): min=18, max=103549, avg=1293.44, stdev=3469.57
00:23:49.995 clat (msec): min=3, max=329, avg=96.41, stdev=63.65
00:23:49.995 lat (msec): min=3, max=329, avg=97.70, stdev=64.44
00:23:49.995 clat percentiles (msec):
00:23:49.995 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 39],
00:23:49.995 | 30.00th=[ 45], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 97],
00:23:49.995 | 70.00th=[ 126], 80.00th=[ 150], 90.00th=[ 186], 95.00th=[ 224],
00:23:49.995 | 99.00th=[ 288], 99.50th=[ 317], 99.90th=[ 330], 99.95th=[ 330],
00:23:49.995 | 99.99th=[ 330]
00:23:49.995 bw ( KiB/s): min=57344, max=419328, per=11.36%, avg=167475.20, stdev=98826.11, samples=20
00:23:49.995 iops : min= 224, max= 1638, avg=654.20, stdev=386.04, samples=20
00:23:49.995 lat (msec) : 4=0.12%, 10=0.65%, 20=0.74%, 50=29.83%, 100=31.25%
00:23:49.995 lat (msec) : 250=34.43%, 500=2.98%
00:23:49.995 cpu : usr=1.55%, sys=1.96%, ctx=2438, majf=0, minf=1
00:23:49.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:23:49.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.995 issued rwts: total=0,6605,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.995 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.995 job7: (groupid=0, jobs=1): err= 0: pid=4190362: Fri Jul 12 17:34:28 2024
00:23:49.995 write: IOPS=472, BW=118MiB/s (124MB/s)(1196MiB/10121msec); 0 zone resets
00:23:49.995 slat (usec): min=23, max=94582, avg=1696.76, stdev=4321.54
00:23:49.995 clat (usec): min=1863, max=328080, avg=133576.57, stdev=58560.04
00:23:49.995 lat (msec): min=2, max=328, avg=135.27, stdev=59.46
00:23:49.995 clat percentiles (msec):
00:23:49.995 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 62], 20.00th=[ 89],
00:23:49.995 | 30.00th=[ 117], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 140],
00:23:49.995 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 203], 95.00th=[ 247],
00:23:49.995 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 330], 99.95th=[ 330],
00:23:49.995 | 99.99th=[ 330]
00:23:49.995 bw ( KiB/s): min=45056, max=197120, per=8.20%, avg=120883.20, stdev=36163.79, samples=20
00:23:49.995 iops : min= 176, max= 770, avg=472.20, stdev=141.26, samples=20
00:23:49.995 lat (msec) : 2=0.02%, 4=0.29%, 10=1.25%, 20=1.53%, 50=4.12%
00:23:49.995 lat (msec) : 100=17.72%, 250=70.45%, 500=4.62%
00:23:49.995 cpu : usr=1.12%, sys=1.32%, ctx=2339, majf=0, minf=1
00:23:49.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:23:49.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.995 issued rwts: total=0,4785,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.995 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.995 job8: (groupid=0, jobs=1): err= 0: pid=4190363: Fri Jul 12 17:34:28 2024
00:23:49.995 write: IOPS=490, BW=123MiB/s (129MB/s)(1238MiB/10082msec); 0 zone resets
00:23:49.995 slat (usec): min=20, max=92935, avg=1657.03, stdev=4443.61
00:23:49.995 clat (usec): min=1129, max=310159, avg=128648.50, stdev=61509.82
00:23:49.995 lat (usec): min=1185, max=310226, avg=130305.53, stdev=62366.85
00:23:49.995 clat percentiles (msec):
00:23:49.995 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 86],
00:23:49.995 | 30.00th=[ 101], 40.00th=[ 123], 50.00th=[ 129], 60.00th=[ 132],
00:23:49.995 | 70.00th=[ 140], 80.00th=[ 167], 90.00th=[ 220], 95.00th=[ 243],
00:23:49.995 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 309],
00:23:49.995 | 99.99th=[ 309]
00:23:49.995 bw ( KiB/s): min=57344, max=173568, per=8.49%, avg=125121.05, stdev=33294.42, samples=20
00:23:49.995 iops : min= 224, max= 678, avg=488.75, stdev=130.05, samples=20
00:23:49.995 lat (msec) : 2=0.20%, 4=0.44%, 10=1.15%, 20=2.02%, 50=7.43%
00:23:49.995 lat (msec) : 100=18.87%, 250=66.24%, 500=3.64%
00:23:49.995 cpu : usr=1.29%, sys=1.45%, ctx=2375, majf=0, minf=1
00:23:49.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7%
00:23:49.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.995 issued rwts: total=0,4950,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.995 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.995 job9: (groupid=0, jobs=1): err= 0: pid=4190366: Fri Jul 12 17:34:28 2024
00:23:49.995 write: IOPS=608, BW=152MiB/s (160MB/s)(1534MiB/10084msec); 0 zone resets
00:23:49.995 slat (usec): min=23, max=12455, avg=1534.37, stdev=2999.85
00:23:49.995 clat (msec): min=7, max=179, avg=103.59, stdev=40.02
00:23:49.995 lat (msec): min=7, max=179, avg=105.13, stdev=40.55
00:23:49.995 clat percentiles (msec):
00:23:49.995 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 68],
00:23:49.995 | 30.00th=[ 73], 40.00th=[ 91], 50.00th=[ 109], 60.00th=[ 127],
00:23:49.995 | 70.00th=[ 133], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 163],
00:23:49.995 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 180], 99.95th=[ 180],
00:23:49.995 | 99.99th=[ 180]
00:23:49.995 bw ( KiB/s): min=102400, max=353792, per=10.55%, avg=155494.40, stdev=63013.58, samples=20
00:23:49.996 iops : min= 400, max= 1382, avg=607.40, stdev=246.15, samples=20
00:23:49.996 lat (msec) : 10=0.03%, 20=0.33%, 50=11.73%, 100=33.47%, 250=54.44%
00:23:49.996 cpu : usr=1.55%, sys=1.94%, ctx=1859, majf=0, minf=1
00:23:49.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:23:49.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.996 issued rwts: total=0,6137,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.996 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.996 job10: (groupid=0, jobs=1): err= 0: pid=4190367: Fri Jul 12 17:34:28 2024
00:23:49.996 write: IOPS=581, BW=145MiB/s (152MB/s)(1465MiB/10085msec); 0 zone resets
00:23:49.996 slat (usec): min=18, max=67413, avg=1345.60, stdev=3389.39
00:23:49.996 clat (usec): min=1240, max=269185, avg=108759.79, stdev=56207.31
00:23:49.996 lat (usec): min=1302, max=269246, avg=110105.40, stdev=56952.63
00:23:49.996 clat percentiles (msec):
00:23:49.996 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 42], 20.00th=[ 63],
00:23:49.996 | 30.00th=[ 68], 40.00th=[ 92], 50.00th=[ 102], 60.00th=[ 120],
00:23:49.996 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 197], 95.00th=[ 220],
00:23:49.996 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 266], 99.95th=[ 266],
00:23:49.996 | 99.99th=[ 271]
00:23:49.996 bw ( KiB/s): min=81920, max=245760, per=10.07%, avg=148403.20, stdev=47742.97, samples=20
00:23:49.996 iops : min= 320, max= 960, avg=579.70, stdev=186.50, samples=20
00:23:49.996 lat (msec) : 2=0.09%, 4=0.29%, 10=1.50%, 20=2.80%, 50=7.59%
00:23:49.996 lat (msec) : 100=36.47%, 250=50.77%, 500=0.49%
00:23:49.996 cpu : usr=1.52%, sys=2.00%, ctx=2799, majf=0, minf=1
00:23:49.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:23:49.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.996 issued rwts: total=0,5860,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.996 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:49.996
00:23:49.996 Run status group 0 (all jobs):
00:23:49.996 WRITE: bw=1439MiB/s (1509MB/s), 112MiB/s-164MiB/s (117MB/s-172MB/s), io=14.3GiB (15.3GB), run=10076-10148msec
00:23:49.996
00:23:49.996 Disk stats (read/write):
00:23:49.996 nvme0n1: ios=41/8967, merge=0/0, ticks=932/1219999, in_queue=1220931, util=100.00%
00:23:49.996 nvme10n1: ios=45/9386, merge=0/0, ticks=828/1220777, in_queue=1221605, util=100.00%
00:23:49.996 nvme1n1: ios=42/10212, merge=0/0, ticks=856/1221667, in_queue=1222523, util=100.00%
00:23:49.996 nvme2n1: ios=0/10974, merge=0/0, ticks=0/1230288, in_queue=1230288, util=96.92%
00:23:49.996 nvme3n1: ios=49/9512, merge=0/0, ticks=4826/1139082, in_queue=1143908, util=100.00%
00:23:49.996 nvme4n1: ios=44/10694, merge=0/0, ticks=1035/1218715, in_queue=1219750, util=100.00%
00:23:49.996 nvme5n1: ios=0/13153, merge=0/0, ticks=0/1230258, in_queue=1230258, util=97.79%
00:23:49.996 nvme6n1: ios=41/9510, merge=0/0, ticks=870/1229327, in_queue=1230197, util=100.00%
00:23:49.996 nvme7n1: ios=31/9847, merge=0/0, ticks=234/1232296, in_queue=1232530, util=100.00%
00:23:49.996 nvme8n1: ios=0/12218, merge=0/0, ticks=0/1226683, in_queue=1226683, util=98.86%
00:23:49.996 nvme9n1: ios=28/11667, merge=0/0, ticks=117/1233412, in_queue=1233529, util=99.65%
00:23:49.996 17:34:28 -- target/multiconnection.sh@36 -- # sync
00:23:49.996 17:34:28 --
target/multiconnection.sh@37 -- # seq 1 11 00:23:49.996 17:34:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.996 17:34:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:49.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:49.996 17:34:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:49.996 17:34:28 -- common/autotest_common.sh@1198 -- # local i=0 00:23:49.996 17:34:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:49.996 17:34:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:49.996 17:34:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:49.996 17:34:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:49.996 17:34:28 -- common/autotest_common.sh@1210 -- # return 0 00:23:49.996 17:34:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.996 17:34:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:49.996 17:34:28 -- common/autotest_common.sh@10 -- # set +x 00:23:49.996 17:34:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:49.996 17:34:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.996 17:34:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:50.563 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:50.563 17:34:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:50.564 17:34:29 -- common/autotest_common.sh@1198 -- # local i=0 00:23:50.564 17:34:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:50.564 17:34:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:50.564 17:34:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:50.564 17:34:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:50.564 17:34:29 -- 
common/autotest_common.sh@1210 -- # return 0 00:23:50.564 17:34:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:50.564 17:34:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.564 17:34:29 -- common/autotest_common.sh@10 -- # set +x 00:23:50.564 17:34:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.564 17:34:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.564 17:34:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:50.822 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:50.822 17:34:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:50.822 17:34:29 -- common/autotest_common.sh@1198 -- # local i=0 00:23:50.822 17:34:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:50.822 17:34:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:50.822 17:34:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:50.822 17:34:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:50.822 17:34:29 -- common/autotest_common.sh@1210 -- # return 0 00:23:50.822 17:34:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:50.822 17:34:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.822 17:34:29 -- common/autotest_common.sh@10 -- # set +x 00:23:50.822 17:34:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.822 17:34:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.822 17:34:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:51.390 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:51.390 17:34:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:51.390 17:34:30 -- common/autotest_common.sh@1198 -- # local i=0 00:23:51.390 17:34:30 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:51.390 17:34:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:51.390 17:34:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:51.390 17:34:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:51.390 17:34:30 -- common/autotest_common.sh@1210 -- # return 0 00:23:51.390 17:34:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:51.390 17:34:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:51.390 17:34:30 -- common/autotest_common.sh@10 -- # set +x 00:23:51.390 17:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:51.390 17:34:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.390 17:34:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:51.390 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:51.390 17:34:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:51.390 17:34:30 -- common/autotest_common.sh@1198 -- # local i=0 00:23:51.390 17:34:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:51.390 17:34:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:51.391 17:34:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:51.391 17:34:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:51.391 17:34:30 -- common/autotest_common.sh@1210 -- # return 0 00:23:51.391 17:34:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:51.391 17:34:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:51.391 17:34:30 -- common/autotest_common.sh@10 -- # set +x 00:23:51.391 17:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:51.391 17:34:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.391 17:34:30 -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:51.650 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:51.650 17:34:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:51.650 17:34:30 -- common/autotest_common.sh@1198 -- # local i=0 00:23:51.650 17:34:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:51.650 17:34:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:51.650 17:34:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:51.650 17:34:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:51.650 17:34:30 -- common/autotest_common.sh@1210 -- # return 0 00:23:51.650 17:34:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:51.650 17:34:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:51.650 17:34:30 -- common/autotest_common.sh@10 -- # set +x 00:23:51.650 17:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:51.650 17:34:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.650 17:34:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:51.908 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:51.908 17:34:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:51.908 17:34:30 -- common/autotest_common.sh@1198 -- # local i=0 00:23:51.908 17:34:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:51.908 17:34:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:51.908 17:34:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:51.908 17:34:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:51.908 17:34:30 -- common/autotest_common.sh@1210 -- # return 0 00:23:51.908 17:34:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:51.908 17:34:30 -- common/autotest_common.sh@551 -- 
# xtrace_disable 00:23:51.908 17:34:30 -- common/autotest_common.sh@10 -- # set +x 00:23:51.908 17:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:51.908 17:34:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.908 17:34:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:52.167 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:52.167 17:34:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:52.167 17:34:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:52.167 17:34:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:52.167 17:34:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:52.167 17:34:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:52.167 17:34:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:52.167 17:34:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:52.167 17:34:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:52.167 17:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.167 17:34:31 -- common/autotest_common.sh@10 -- # set +x 00:23:52.167 17:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.167 17:34:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.167 17:34:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:52.425 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:52.425 17:34:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:52.425 17:34:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:52.425 17:34:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:52.425 17:34:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:52.425 17:34:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:52.425 
17:34:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:52.425 17:34:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:52.425 17:34:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:52.425 17:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.425 17:34:31 -- common/autotest_common.sh@10 -- # set +x 00:23:52.425 17:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.425 17:34:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.425 17:34:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:52.425 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:52.425 17:34:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:52.425 17:34:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:52.425 17:34:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:52.425 17:34:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:52.683 17:34:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:52.683 17:34:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:52.683 17:34:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:52.683 17:34:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:52.683 17:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.683 17:34:31 -- common/autotest_common.sh@10 -- # set +x 00:23:52.683 17:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.683 17:34:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.683 17:34:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:52.683 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:52.683 17:34:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:52.683 
17:34:31 -- common/autotest_common.sh@1198 -- # local i=0 00:23:52.683 17:34:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:52.684 17:34:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:52.684 17:34:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:52.684 17:34:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:52.684 17:34:31 -- common/autotest_common.sh@1210 -- # return 0 00:23:52.684 17:34:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:52.684 17:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.684 17:34:31 -- common/autotest_common.sh@10 -- # set +x 00:23:52.684 17:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.684 17:34:31 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:52.684 17:34:31 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:52.684 17:34:31 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:52.684 17:34:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:52.684 17:34:31 -- nvmf/common.sh@116 -- # sync 00:23:52.684 17:34:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:52.684 17:34:31 -- nvmf/common.sh@119 -- # set +e 00:23:52.684 17:34:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:52.684 17:34:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:52.684 rmmod nvme_tcp 00:23:52.684 rmmod nvme_fabrics 00:23:52.684 rmmod nvme_keyring 00:23:52.684 17:34:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:52.684 17:34:31 -- nvmf/common.sh@123 -- # set -e 00:23:52.684 17:34:31 -- nvmf/common.sh@124 -- # return 0 00:23:52.684 17:34:31 -- nvmf/common.sh@477 -- # '[' -n 4180934 ']' 00:23:52.684 17:34:31 -- nvmf/common.sh@478 -- # killprocess 4180934 00:23:52.684 17:34:31 -- common/autotest_common.sh@926 -- # '[' -z 4180934 ']' 00:23:52.684 17:34:31 -- common/autotest_common.sh@930 -- # kill -0 4180934 
00:23:52.684 17:34:31 -- common/autotest_common.sh@931 -- # uname 00:23:52.684 17:34:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:52.684 17:34:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4180934 00:23:52.942 17:34:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:52.942 17:34:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:52.942 17:34:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4180934' 00:23:52.942 killing process with pid 4180934 00:23:52.942 17:34:31 -- common/autotest_common.sh@945 -- # kill 4180934 00:23:52.942 17:34:31 -- common/autotest_common.sh@950 -- # wait 4180934 00:23:53.200 17:34:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:53.200 17:34:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:53.200 17:34:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:53.200 17:34:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.200 17:34:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:53.200 17:34:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.200 17:34:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.200 17:34:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.733 17:34:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:55.733 00:23:55.733 real 1m13.412s 00:23:55.733 user 4m35.273s 00:23:55.733 sys 0m22.848s 00:23:55.733 17:34:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.733 17:34:34 -- common/autotest_common.sh@10 -- # set +x 00:23:55.733 ************************************ 00:23:55.733 END TEST nvmf_multiconnection 00:23:55.733 ************************************ 00:23:55.733 17:34:34 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:55.733 17:34:34 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:55.733 17:34:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:55.733 17:34:34 -- common/autotest_common.sh@10 -- # set +x 00:23:55.733 ************************************ 00:23:55.733 START TEST nvmf_initiator_timeout 00:23:55.733 ************************************ 00:23:55.733 17:34:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:55.733 * Looking for test storage... 00:23:55.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:55.733 17:34:34 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.733 17:34:34 -- nvmf/common.sh@7 -- # uname -s 00:23:55.733 17:34:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.733 17:34:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.733 17:34:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.733 17:34:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.733 17:34:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.733 17:34:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.733 17:34:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.733 17:34:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.733 17:34:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.734 17:34:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.734 17:34:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:55.734 17:34:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:55.734 17:34:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.734 17:34:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.734 17:34:34 -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.734 17:34:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.734 17:34:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.734 17:34:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.734 17:34:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.734 17:34:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.734 17:34:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.734 17:34:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.734 17:34:34 -- paths/export.sh@5 -- # export PATH 00:23:55.734 17:34:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.734 17:34:34 -- nvmf/common.sh@46 -- # : 0 00:23:55.734 17:34:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:55.734 17:34:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:55.734 17:34:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:55.734 17:34:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.734 17:34:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.734 17:34:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:55.734 17:34:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:55.734 17:34:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:55.734 17:34:34 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.734 17:34:34 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.734 17:34:34 -- 
target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:55.734 17:34:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:55.734 17:34:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.734 17:34:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:55.734 17:34:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:55.734 17:34:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:55.734 17:34:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.734 17:34:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.734 17:34:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.734 17:34:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:55.734 17:34:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:55.734 17:34:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:55.734 17:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:01.010 17:34:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:01.010 17:34:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:01.010 17:34:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:01.010 17:34:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:01.010 17:34:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:01.010 17:34:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:01.010 17:34:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:01.010 17:34:39 -- nvmf/common.sh@294 -- # net_devs=() 00:24:01.010 17:34:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:01.010 17:34:39 -- nvmf/common.sh@295 -- # e810=() 00:24:01.010 17:34:39 -- nvmf/common.sh@295 -- # local -ga e810 00:24:01.010 17:34:39 -- nvmf/common.sh@296 -- # x722=() 00:24:01.010 17:34:39 -- nvmf/common.sh@296 -- # local -ga x722 00:24:01.010 17:34:39 -- nvmf/common.sh@297 -- # mlx=() 00:24:01.010 17:34:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:01.010 17:34:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:24:01.010 17:34:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.010 17:34:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:01.010 17:34:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:01.010 17:34:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:01.010 17:34:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:01.010 17:34:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:01.010 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:01.010 17:34:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:24:01.010 17:34:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:01.010 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:01.010 17:34:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:01.010 17:34:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:01.010 17:34:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.010 17:34:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:01.010 17:34:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.010 17:34:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:01.010 Found net devices under 0000:af:00.0: cvl_0_0 00:24:01.010 17:34:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.010 17:34:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:01.010 17:34:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.010 17:34:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:01.010 17:34:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.010 17:34:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:01.010 Found net devices under 0000:af:00.1: cvl_0_1 00:24:01.010 17:34:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.010 17:34:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:01.010 17:34:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:01.010 17:34:39 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:01.010 17:34:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.010 17:34:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.010 17:34:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.010 17:34:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:01.010 17:34:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.010 17:34:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.010 17:34:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:01.010 17:34:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.010 17:34:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.010 17:34:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:01.010 17:34:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:01.010 17:34:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.010 17:34:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.010 17:34:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.010 17:34:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.010 17:34:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:01.010 17:34:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.010 17:34:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.010 17:34:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.010 17:34:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:01.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:01.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:24:01.010 00:24:01.010 --- 10.0.0.2 ping statistics --- 00:24:01.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.010 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:24:01.010 17:34:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:24:01.010 00:24:01.010 --- 10.0.0.1 ping statistics --- 00:24:01.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.010 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:01.010 17:34:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.010 17:34:39 -- nvmf/common.sh@410 -- # return 0 00:24:01.010 17:34:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:01.010 17:34:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.010 17:34:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:01.010 17:34:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.010 17:34:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:01.011 17:34:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:01.011 17:34:39 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:01.011 17:34:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:01.011 17:34:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:01.011 17:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:01.011 17:34:39 -- nvmf/common.sh@469 -- # nvmfpid=2449 00:24:01.011 17:34:39 -- nvmf/common.sh@470 -- # waitforlisten 2449 00:24:01.011 17:34:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:01.011 17:34:39 -- 
common/autotest_common.sh@819 -- # '[' -z 2449 ']' 00:24:01.011 17:34:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.011 17:34:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:01.011 17:34:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.011 17:34:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:01.011 17:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:01.011 [2024-07-12 17:34:39.921552] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:01.011 [2024-07-12 17:34:39.921607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.011 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.269 [2024-07-12 17:34:40.008441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.269 [2024-07-12 17:34:40.054075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:01.269 [2024-07-12 17:34:40.054229] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.269 [2024-07-12 17:34:40.054242] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.269 [2024-07-12 17:34:40.054252] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.269 [2024-07-12 17:34:40.054302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.269 [2024-07-12 17:34:40.054405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.269 [2024-07-12 17:34:40.054486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.269 [2024-07-12 17:34:40.054489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.219 17:34:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:02.219 17:34:40 -- common/autotest_common.sh@852 -- # return 0 00:24:02.219 17:34:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:02.219 17:34:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:02.219 17:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 17:34:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:02.219 17:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.219 17:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 Malloc0 00:24:02.219 17:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:02.219 17:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.219 17:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 Delay0 00:24:02.219 17:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.219 17:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.219 17:34:40 -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.219 [2024-07-12 17:34:40.931549] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.219 17:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:02.219 17:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.219 17:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 17:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:02.219 17:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.219 17:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 17:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.219 17:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.219 17:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:02.219 [2024-07-12 17:34:40.963840] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.219 17:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.219 17:34:40 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:03.592 17:34:42 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:03.592 17:34:42 -- common/autotest_common.sh@1177 -- # local i=0 00:24:03.592 17:34:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:03.592 17:34:42 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:03.592 17:34:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:05.493 17:34:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:05.493 17:34:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:05.493 17:34:44 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:05.493 17:34:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:05.493 17:34:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:05.493 17:34:44 -- common/autotest_common.sh@1187 -- # return 0 00:24:05.493 17:34:44 -- target/initiator_timeout.sh@35 -- # fio_pid=3425 00:24:05.493 17:34:44 -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:05.493 17:34:44 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:05.493 [global] 00:24:05.493 thread=1 00:24:05.493 invalidate=1 00:24:05.493 rw=write 00:24:05.493 time_based=1 00:24:05.493 runtime=60 00:24:05.493 ioengine=libaio 00:24:05.493 direct=1 00:24:05.493 bs=4096 00:24:05.493 iodepth=1 00:24:05.493 norandommap=0 00:24:05.493 numjobs=1 00:24:05.493 00:24:05.493 verify_dump=1 00:24:05.493 verify_backlog=512 00:24:05.493 verify_state_save=0 00:24:05.493 do_verify=1 00:24:05.493 verify=crc32c-intel 00:24:05.493 [job0] 00:24:05.493 filename=/dev/nvme0n1 00:24:05.493 Could not set queue depth (nvme0n1) 00:24:05.752 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:05.752 fio-3.35 00:24:05.752 Starting 1 thread 00:24:09.097 17:34:47 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:09.097 17:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.097 17:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 true 00:24:09.097 17:34:47 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:24:09.097 17:34:47 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:09.097 17:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.097 17:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 true 00:24:09.097 17:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.097 17:34:47 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:09.097 17:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.097 17:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 true 00:24:09.097 17:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.097 17:34:47 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:09.097 17:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.097 17:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 true 00:24:09.097 17:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.097 17:34:47 -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:11.629 17:34:50 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:11.629 17:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.629 17:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:11.629 true 00:24:11.629 17:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.629 17:34:50 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:11.629 17:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.629 17:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:11.629 true 00:24:11.629 17:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.629 17:34:50 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:11.629 17:34:50 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:24:11.629 17:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:11.629 true 00:24:11.629 17:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.629 17:34:50 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:11.629 17:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.629 17:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:11.629 true 00:24:11.629 17:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.629 17:34:50 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:11.629 17:34:50 -- target/initiator_timeout.sh@54 -- # wait 3425 00:25:07.855 00:25:07.855 job0: (groupid=0, jobs=1): err= 0: pid=3565: Fri Jul 12 17:35:44 2024 00:25:07.855 read: IOPS=173, BW=693KiB/s (710kB/s)(40.6MiB/60016msec) 00:25:07.855 slat (usec): min=6, max=13481, avg=10.04, stdev=152.83 00:25:07.855 clat (usec): min=252, max=41525k, avg=5512.31, stdev=407168.69 00:25:07.855 lat (usec): min=269, max=41525k, avg=5522.36, stdev=407168.87 00:25:07.855 clat percentiles (usec): 00:25:07.855 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:25:07.855 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:25:07.855 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 445], 95.00th=[ 498], 00:25:07.855 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:07.855 | 99.99th=[42730] 00:25:07.855 write: IOPS=179, BW=717KiB/s (734kB/s)(42.0MiB/60016msec); 0 zone resets 00:25:07.855 slat (nsec): min=8796, max=71417, avg=10692.46, stdev=1481.57 00:25:07.855 clat (usec): min=169, max=2231, avg=223.43, stdev=33.68 00:25:07.855 lat (usec): min=179, max=2242, avg=234.13, stdev=33.94 00:25:07.855 clat percentiles (usec): 00:25:07.855 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:25:07.855 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:25:07.855 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 
95.00th=[ 269], 00:25:07.855 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 383], 99.95th=[ 433], 00:25:07.855 | 99.99th=[ 490] 00:25:07.855 bw ( KiB/s): min= 112, max= 8192, per=100.00%, avg=5734.40, stdev=2386.77, samples=15 00:25:07.855 iops : min= 28, max= 2048, avg=1433.60, stdev=596.69, samples=15 00:25:07.855 lat (usec) : 250=45.65%, 500=52.02%, 750=0.88% 00:25:07.855 lat (msec) : 4=0.01%, 50=1.43%, >=2000=0.01% 00:25:07.855 cpu : usr=0.18%, sys=0.35%, ctx=21159, majf=0, minf=2 00:25:07.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:07.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.855 issued rwts: total=10403,10752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:07.855 00:25:07.855 Run status group 0 (all jobs): 00:25:07.855 READ: bw=693KiB/s (710kB/s), 693KiB/s-693KiB/s (710kB/s-710kB/s), io=40.6MiB (42.6MB), run=60016-60016msec 00:25:07.855 WRITE: bw=717KiB/s (734kB/s), 717KiB/s-717KiB/s (734kB/s-734kB/s), io=42.0MiB (44.0MB), run=60016-60016msec 00:25:07.855 00:25:07.855 Disk stats (read/write): 00:25:07.855 nvme0n1: ios=10499/10752, merge=0/0, ticks=16861/2362, in_queue=19223, util=99.60% 00:25:07.855 17:35:44 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:07.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:07.856 17:35:44 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:07.856 17:35:44 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.856 17:35:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.856 17:35:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:07.856 17:35:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.856 17:35:44 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:07.856 17:35:44 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.856 17:35:44 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:07.856 17:35:44 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:07.856 nvmf hotplug test: fio successful as expected 00:25:07.856 17:35:44 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.856 17:35:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.856 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.856 17:35:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.856 17:35:44 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:07.856 17:35:44 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:07.856 17:35:44 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:07.856 17:35:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:07.856 17:35:44 -- nvmf/common.sh@116 -- # sync 00:25:07.856 17:35:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:07.856 17:35:44 -- nvmf/common.sh@119 -- # set +e 00:25:07.856 17:35:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.856 17:35:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:07.856 rmmod nvme_tcp 00:25:07.856 rmmod nvme_fabrics 00:25:07.856 rmmod nvme_keyring 00:25:07.856 17:35:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:07.856 17:35:45 -- nvmf/common.sh@123 -- # set -e 00:25:07.856 17:35:45 -- nvmf/common.sh@124 -- # return 0 00:25:07.856 17:35:45 -- nvmf/common.sh@477 -- # '[' -n 2449 ']' 00:25:07.856 17:35:45 -- nvmf/common.sh@478 -- # killprocess 2449 00:25:07.856 17:35:45 -- common/autotest_common.sh@926 -- # '[' -z 2449 ']' 00:25:07.856 17:35:45 -- common/autotest_common.sh@930 -- # kill -0 2449 00:25:07.856 17:35:45 -- common/autotest_common.sh@931 -- # uname 00:25:07.856 
17:35:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:07.856 17:35:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2449 00:25:07.856 17:35:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:07.856 17:35:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:07.856 17:35:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2449' 00:25:07.856 killing process with pid 2449 00:25:07.856 17:35:45 -- common/autotest_common.sh@945 -- # kill 2449 00:25:07.856 17:35:45 -- common/autotest_common.sh@950 -- # wait 2449 00:25:07.856 17:35:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.856 17:35:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:07.856 17:35:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:07.856 17:35:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.856 17:35:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:07.856 17:35:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.856 17:35:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.856 17:35:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.424 17:35:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:08.424 00:25:08.424 real 1m13.161s 00:25:08.424 user 4m31.608s 00:25:08.424 sys 0m6.458s 00:25:08.424 17:35:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.424 17:35:47 -- common/autotest_common.sh@10 -- # set +x 00:25:08.424 ************************************ 00:25:08.424 END TEST nvmf_initiator_timeout 00:25:08.424 ************************************ 00:25:08.684 17:35:47 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:25:08.684 17:35:47 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:25:08.684 17:35:47 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:25:08.684 17:35:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:08.684 17:35:47 -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.957 17:35:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:13.958 17:35:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:13.958 17:35:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:13.958 17:35:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:13.958 17:35:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:13.958 17:35:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:13.958 17:35:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:13.958 17:35:52 -- nvmf/common.sh@294 -- # net_devs=() 00:25:13.958 17:35:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:13.958 17:35:52 -- nvmf/common.sh@295 -- # e810=() 00:25:13.958 17:35:52 -- nvmf/common.sh@295 -- # local -ga e810 00:25:13.958 17:35:52 -- nvmf/common.sh@296 -- # x722=() 00:25:13.958 17:35:52 -- nvmf/common.sh@296 -- # local -ga x722 00:25:13.958 17:35:52 -- nvmf/common.sh@297 -- # mlx=() 00:25:13.958 17:35:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:13.958 17:35:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.958 17:35:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:13.958 17:35:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:13.958 17:35:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:13.958 17:35:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.958 17:35:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:13.958 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:13.958 17:35:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.958 17:35:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:13.958 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:13.958 17:35:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:13.958 17:35:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.958 17:35:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:13.958 17:35:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.958 17:35:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.958 17:35:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:13.958 Found net devices under 0000:af:00.0: cvl_0_0 00:25:13.958 17:35:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.958 17:35:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.958 17:35:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.958 17:35:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.958 17:35:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.958 17:35:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:13.958 Found net devices under 0000:af:00.1: cvl_0_1 00:25:13.958 17:35:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.958 17:35:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:13.958 17:35:52 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.958 17:35:52 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:25:13.958 17:35:52 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:13.958 17:35:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:13.958 17:35:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:13.958 17:35:52 -- common/autotest_common.sh@10 -- # set +x 00:25:13.958 ************************************ 00:25:13.958 START TEST nvmf_perf_adq 00:25:13.958 ************************************ 00:25:13.958 17:35:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:13.958 * Looking for test storage... 
00:25:13.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:13.958 17:35:52 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.958 17:35:52 -- nvmf/common.sh@7 -- # uname -s 00:25:13.958 17:35:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.958 17:35:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.958 17:35:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.958 17:35:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.958 17:35:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.958 17:35:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.958 17:35:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.958 17:35:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.958 17:35:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.958 17:35:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.958 17:35:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:13.958 17:35:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:13.958 17:35:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.958 17:35:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.958 17:35:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.958 17:35:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.958 17:35:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.958 17:35:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.958 17:35:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.958 17:35:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.958 17:35:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.958 17:35:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.958 17:35:52 -- paths/export.sh@5 -- # export PATH 00:25:13.958 17:35:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.958 17:35:52 -- nvmf/common.sh@46 -- # : 0 00:25:13.958 17:35:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:13.958 17:35:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:13.958 17:35:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:13.958 17:35:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.958 17:35:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.958 17:35:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:13.958 17:35:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:13.958 17:35:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:13.958 17:35:52 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:13.958 17:35:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:13.958 17:35:52 -- common/autotest_common.sh@10 -- # set +x 00:25:19.230 17:35:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:19.230 17:35:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:19.230 17:35:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:19.230 17:35:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:19.230 17:35:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:19.230 17:35:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:19.230 17:35:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:19.230 17:35:57 -- nvmf/common.sh@294 -- # net_devs=() 00:25:19.230 17:35:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:19.230 17:35:57 
-- nvmf/common.sh@295 -- # e810=() 00:25:19.230 17:35:57 -- nvmf/common.sh@295 -- # local -ga e810 00:25:19.230 17:35:57 -- nvmf/common.sh@296 -- # x722=() 00:25:19.230 17:35:57 -- nvmf/common.sh@296 -- # local -ga x722 00:25:19.230 17:35:57 -- nvmf/common.sh@297 -- # mlx=() 00:25:19.230 17:35:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:19.230 17:35:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.230 17:35:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:19.230 17:35:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:19.230 17:35:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:19.230 17:35:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:19.230 17:35:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:19.230 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:25:19.230 17:35:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:19.230 17:35:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:19.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:19.230 17:35:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:19.230 17:35:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:19.230 17:35:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:19.230 17:35:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.230 17:35:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:19.230 17:35:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.230 17:35:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:19.230 Found net devices under 0000:af:00.0: cvl_0_0 00:25:19.230 17:35:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.230 17:35:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:19.230 17:35:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.230 17:35:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:19.230 17:35:57 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.230 17:35:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:19.230 Found net devices under 0000:af:00.1: cvl_0_1 00:25:19.230 17:35:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.230 17:35:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:19.230 17:35:57 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.230 17:35:57 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:19.230 17:35:57 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:19.231 17:35:57 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:25:19.231 17:35:57 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:20.161 17:35:58 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:22.062 17:36:00 -- target/perf_adq.sh@54 -- # sleep 5 00:25:27.333 17:36:05 -- target/perf_adq.sh@67 -- # nvmftestinit 00:25:27.333 17:36:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:27.333 17:36:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.333 17:36:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:27.333 17:36:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:27.333 17:36:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:27.333 17:36:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.333 17:36:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.333 17:36:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.333 17:36:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:27.333 17:36:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:27.333 17:36:05 -- common/autotest_common.sh@10 -- # set +x 00:25:27.333 17:36:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:27.333 17:36:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:27.333 
17:36:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:27.333 17:36:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:27.333 17:36:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:27.333 17:36:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:27.333 17:36:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:27.333 17:36:05 -- nvmf/common.sh@294 -- # net_devs=() 00:25:27.333 17:36:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:27.333 17:36:05 -- nvmf/common.sh@295 -- # e810=() 00:25:27.333 17:36:05 -- nvmf/common.sh@295 -- # local -ga e810 00:25:27.333 17:36:05 -- nvmf/common.sh@296 -- # x722=() 00:25:27.333 17:36:05 -- nvmf/common.sh@296 -- # local -ga x722 00:25:27.333 17:36:05 -- nvmf/common.sh@297 -- # mlx=() 00:25:27.333 17:36:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:27.333 17:36:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.333 17:36:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:27.333 17:36:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:27.333 17:36:05 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:27.333 17:36:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:27.333 17:36:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:27.333 17:36:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:27.333 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:27.333 17:36:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:27.333 17:36:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:27.333 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:27.333 17:36:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:27.333 17:36:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:27.333 17:36:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:27.333 17:36:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.333 17:36:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:27.333 17:36:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.333 17:36:05 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:af:00.0: cvl_0_0' 00:25:27.333 Found net devices under 0000:af:00.0: cvl_0_0 00:25:27.333 17:36:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.334 17:36:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:27.334 17:36:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.334 17:36:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:27.334 17:36:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.334 17:36:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:27.334 Found net devices under 0000:af:00.1: cvl_0_1 00:25:27.334 17:36:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.334 17:36:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:27.334 17:36:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:27.334 17:36:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:27.334 17:36:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:27.334 17:36:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:27.334 17:36:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.334 17:36:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.334 17:36:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.334 17:36:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:27.334 17:36:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.334 17:36:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.334 17:36:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:27.334 17:36:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.334 17:36:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.334 17:36:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:27.334 17:36:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:27.334 17:36:05 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk 00:25:27.334 17:36:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.334 17:36:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.334 17:36:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.334 17:36:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:27.334 17:36:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.334 17:36:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.334 17:36:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.334 17:36:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:27.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:25:27.334 00:25:27.334 --- 10.0.0.2 ping statistics --- 00:25:27.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.334 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:25:27.334 17:36:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:27.334 00:25:27.334 --- 10.0.0.1 ping statistics --- 00:25:27.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.334 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:27.334 17:36:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.334 17:36:06 -- nvmf/common.sh@410 -- # return 0 00:25:27.334 17:36:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:27.334 17:36:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.334 17:36:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:27.334 17:36:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:27.334 17:36:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.334 17:36:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:27.334 17:36:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:27.334 17:36:06 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:27.334 17:36:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:27.334 17:36:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:27.334 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.334 17:36:06 -- nvmf/common.sh@469 -- # nvmfpid=22635 00:25:27.334 17:36:06 -- nvmf/common.sh@470 -- # waitforlisten 22635 00:25:27.334 17:36:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:27.334 17:36:06 -- common/autotest_common.sh@819 -- # '[' -z 22635 ']' 00:25:27.334 17:36:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.334 17:36:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:27.334 17:36:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:27.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.334 17:36:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:27.334 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.334 [2024-07-12 17:36:06.259023] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:27.334 [2024-07-12 17:36:06.259076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.334 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.592 [2024-07-12 17:36:06.344261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:27.592 [2024-07-12 17:36:06.387081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:27.592 [2024-07-12 17:36:06.387238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.592 [2024-07-12 17:36:06.387250] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.592 [2024-07-12 17:36:06.387267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:27.592 [2024-07-12 17:36:06.387313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.592 [2024-07-12 17:36:06.387417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.592 [2024-07-12 17:36:06.387509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.592 [2024-07-12 17:36:06.387511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.592 17:36:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:27.592 17:36:06 -- common/autotest_common.sh@852 -- # return 0 00:25:27.592 17:36:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:27.592 17:36:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:27.592 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.592 17:36:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.592 17:36:06 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:25:27.592 17:36:06 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:27.592 17:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.592 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.592 17:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.592 17:36:06 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:27.592 17:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.592 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.850 17:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.850 17:36:06 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:27.850 17:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.850 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.850 [2024-07-12 17:36:06.614097] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:25:27.850 17:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.850 17:36:06 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:27.850 17:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.850 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.850 Malloc1 00:25:27.850 17:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.850 17:36:06 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.850 17:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.850 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.850 17:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.850 17:36:06 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:27.850 17:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.850 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.850 17:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.850 17:36:06 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.850 17:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.850 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.850 [2024-07-12 17:36:06.665951] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.850 17:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.850 17:36:06 -- target/perf_adq.sh@73 -- # perfpid=22696 00:25:27.850 17:36:06 -- target/perf_adq.sh@74 -- # sleep 2 00:25:27.850 17:36:06 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:27.850 
EAL: No free 2048 kB hugepages reported on node 1 00:25:29.752 17:36:08 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:25:29.752 17:36:08 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:29.752 17:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.752 17:36:08 -- target/perf_adq.sh@76 -- # wc -l 00:25:29.752 17:36:08 -- common/autotest_common.sh@10 -- # set +x 00:25:29.752 17:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.011 17:36:08 -- target/perf_adq.sh@76 -- # count=4 00:25:30.011 17:36:08 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:25:30.011 17:36:08 -- target/perf_adq.sh@81 -- # wait 22696 00:25:38.130 Initializing NVMe Controllers 00:25:38.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:38.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:38.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:38.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:38.131 Initialization complete. Launching workers. 
00:25:38.131 ======================================================== 00:25:38.131 Latency(us) 00:25:38.131 Device Information : IOPS MiB/s Average min max 00:25:38.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7115.59 27.80 8996.19 3648.42 12687.01 00:25:38.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10395.52 40.61 6155.99 2147.25 9413.84 00:25:38.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8431.84 32.94 7589.90 1803.14 12172.73 00:25:38.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8454.84 33.03 7568.52 1990.80 13690.74 00:25:38.131 ======================================================== 00:25:38.131 Total : 34397.80 134.37 7442.20 1803.14 13690.74 00:25:38.131 00:25:38.131 17:36:16 -- target/perf_adq.sh@82 -- # nvmftestfini 00:25:38.131 17:36:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:38.131 17:36:16 -- nvmf/common.sh@116 -- # sync 00:25:38.131 17:36:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:38.131 17:36:16 -- nvmf/common.sh@119 -- # set +e 00:25:38.131 17:36:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:38.131 17:36:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:38.131 rmmod nvme_tcp 00:25:38.131 rmmod nvme_fabrics 00:25:38.131 rmmod nvme_keyring 00:25:38.131 17:36:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:38.131 17:36:16 -- nvmf/common.sh@123 -- # set -e 00:25:38.131 17:36:16 -- nvmf/common.sh@124 -- # return 0 00:25:38.131 17:36:16 -- nvmf/common.sh@477 -- # '[' -n 22635 ']' 00:25:38.131 17:36:16 -- nvmf/common.sh@478 -- # killprocess 22635 00:25:38.131 17:36:16 -- common/autotest_common.sh@926 -- # '[' -z 22635 ']' 00:25:38.131 17:36:16 -- common/autotest_common.sh@930 -- # kill -0 22635 00:25:38.131 17:36:16 -- common/autotest_common.sh@931 -- # uname 00:25:38.131 17:36:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:38.131 17:36:16 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 22635 00:25:38.131 17:36:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:38.131 17:36:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:38.131 17:36:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 22635' 00:25:38.131 killing process with pid 22635 00:25:38.131 17:36:16 -- common/autotest_common.sh@945 -- # kill 22635 00:25:38.131 17:36:16 -- common/autotest_common.sh@950 -- # wait 22635 00:25:38.390 17:36:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:38.390 17:36:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:38.390 17:36:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:38.390 17:36:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.390 17:36:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:38.390 17:36:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.390 17:36:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.390 17:36:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.299 17:36:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:40.299 17:36:19 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:40.299 17:36:19 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:41.676 17:36:20 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:43.651 17:36:22 -- target/perf_adq.sh@54 -- # sleep 5 00:25:48.934 17:36:27 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:48.934 17:36:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.934 17:36:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.934 17:36:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.934 17:36:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.934 17:36:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.934 17:36:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.934 17:36:27 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.934 17:36:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.934 17:36:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:48.934 17:36:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:48.934 17:36:27 -- common/autotest_common.sh@10 -- # set +x 00:25:48.934 17:36:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:48.934 17:36:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:48.934 17:36:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:48.934 17:36:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:48.934 17:36:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:48.934 17:36:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:48.934 17:36:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:48.934 17:36:27 -- nvmf/common.sh@294 -- # net_devs=() 00:25:48.934 17:36:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:48.934 17:36:27 -- nvmf/common.sh@295 -- # e810=() 00:25:48.934 17:36:27 -- nvmf/common.sh@295 -- # local -ga e810 00:25:48.934 17:36:27 -- nvmf/common.sh@296 -- # x722=() 00:25:48.934 17:36:27 -- nvmf/common.sh@296 -- # local -ga x722 00:25:48.934 17:36:27 -- nvmf/common.sh@297 -- # mlx=() 00:25:48.934 17:36:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:48.934 17:36:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.934 17:36:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:48.934 17:36:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:48.934 17:36:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:48.934 17:36:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.934 17:36:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:48.934 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:48.934 17:36:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.934 17:36:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.934 17:36:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:48.935 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:48.935 17:36:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:25:48.935 17:36:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:48.935 17:36:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.935 17:36:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.935 17:36:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.935 17:36:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.935 17:36:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:48.935 Found net devices under 0000:af:00.0: cvl_0_0 00:25:48.935 17:36:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.935 17:36:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.935 17:36:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.935 17:36:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.935 17:36:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.935 17:36:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:48.935 Found net devices under 0000:af:00.1: cvl_0_1 00:25:48.935 17:36:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.935 17:36:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:48.935 17:36:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:48.935 17:36:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:48.935 17:36:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.935 17:36:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.935 17:36:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.935 17:36:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:48.935 17:36:27 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.935 17:36:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.935 17:36:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:48.935 17:36:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.935 17:36:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.935 17:36:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:48.935 17:36:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:48.935 17:36:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.935 17:36:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.935 17:36:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.935 17:36:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.935 17:36:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:48.935 17:36:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.935 17:36:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.935 17:36:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.935 17:36:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:48.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:48.935 00:25:48.935 --- 10.0.0.2 ping statistics --- 00:25:48.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.935 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:48.935 17:36:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:25:48.935 00:25:48.935 --- 10.0.0.1 ping statistics --- 00:25:48.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.935 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:48.935 17:36:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.935 17:36:27 -- nvmf/common.sh@410 -- # return 0 00:25:48.935 17:36:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:48.935 17:36:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.935 17:36:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:48.935 17:36:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.935 17:36:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:48.935 17:36:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:49.194 17:36:27 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:49.194 17:36:27 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:49.194 17:36:27 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:49.194 17:36:27 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:49.194 net.core.busy_poll = 1 00:25:49.194 17:36:27 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:49.194 net.core.busy_read = 1 00:25:49.194 17:36:27 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:49.194 17:36:27 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:49.194 17:36:28 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:49.194 17:36:28 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:49.194 17:36:28 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:49.194 17:36:28 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:49.194 17:36:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:49.194 17:36:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:49.194 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.454 17:36:28 -- nvmf/common.sh@469 -- # nvmfpid=27393 00:25:49.454 17:36:28 -- nvmf/common.sh@470 -- # waitforlisten 27393 00:25:49.454 17:36:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:49.454 17:36:28 -- common/autotest_common.sh@819 -- # '[' -z 27393 ']' 00:25:49.454 17:36:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.454 17:36:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:49.454 17:36:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.454 17:36:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.454 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.454 [2024-07-12 17:36:28.224614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
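The adq_configure_driver steps interleaved above reduce to the sketch below, run against the target port inside the namespace (`ip netns exec cvl_0_0_ns_spdk ...`). The two-TC split and the flower match values are exactly the ones used in this run; this is privileged hardware configuration, so it is shown as a fragment rather than a runnable test:

```shell
#!/bin/sh
# ADQ setup for the namespaced target port cvl_0_0.
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = queues 0-1 (2@0), TC1 = queues 2-3 (2@2),
# offloaded to the NIC in channel mode.
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1, hardware-only (skip_sw).
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

After this, the set_xps_rxqs script (invoked next in the trace) pins transmit queues to receive queues so ADQ polling stays core-local.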
00:25:49.454 [2024-07-12 17:36:28.224675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.454 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.454 [2024-07-12 17:36:28.312993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.454 [2024-07-12 17:36:28.355854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:49.454 [2024-07-12 17:36:28.355998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.454 [2024-07-12 17:36:28.356010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.454 [2024-07-12 17:36:28.356019] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.454 [2024-07-12 17:36:28.356070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.454 [2024-07-12 17:36:28.356171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.454 [2024-07-12 17:36:28.356264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.454 [2024-07-12 17:36:28.356266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.454 17:36:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:49.454 17:36:28 -- common/autotest_common.sh@852 -- # return 0 00:25:49.454 17:36:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:49.454 17:36:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:49.454 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.713 17:36:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.713 17:36:28 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:49.713 17:36:28 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:49.713 17:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.713 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.713 17:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.713 17:36:28 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:49.713 17:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.713 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.713 17:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.713 17:36:28 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:49.713 17:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.713 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.713 [2024-07-12 17:36:28.557680] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.713 17:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.713 17:36:28 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:49.713 17:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.713 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.713 Malloc1 00:25:49.713 17:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.713 17:36:28 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.713 17:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.713 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.713 17:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.713 17:36:28 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:49.713 17:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.713 17:36:28 -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.713 17:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.713 17:36:28 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.713 17:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.713 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.713 [2024-07-12 17:36:28.609458] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.713 17:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.713 17:36:28 -- target/perf_adq.sh@94 -- # perfpid=27425 00:25:49.713 17:36:28 -- target/perf_adq.sh@95 -- # sleep 2 00:25:49.713 17:36:28 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:49.713 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.248 17:36:30 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:52.248 17:36:30 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:52.248 17:36:30 -- target/perf_adq.sh@97 -- # wc -l 00:25:52.248 17:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.248 17:36:30 -- common/autotest_common.sh@10 -- # set +x 00:25:52.248 17:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.248 17:36:30 -- target/perf_adq.sh@97 -- # count=2 00:25:52.248 17:36:30 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:52.248 17:36:30 -- target/perf_adq.sh@103 -- # wait 27425 00:26:00.359 Initializing NVMe Controllers 00:26:00.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:00.359 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:00.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:00.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:00.359 Initialization complete. Launching workers. 00:26:00.359 ======================================================== 00:26:00.359 Latency(us) 00:26:00.359 Device Information : IOPS MiB/s Average min max 00:26:00.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5554.20 21.70 11557.87 2534.82 59291.16 00:26:00.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6367.00 24.87 10053.81 1127.03 54300.73 00:26:00.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5614.20 21.93 11402.77 1513.60 58554.63 00:26:00.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10263.69 40.09 6254.41 1411.40 49112.55 00:26:00.359 ======================================================== 00:26:00.359 Total : 27799.09 108.59 9223.97 1127.03 59291.16 00:26:00.359 00:26:00.359 17:36:38 -- target/perf_adq.sh@104 -- # nvmftestfini 00:26:00.359 17:36:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:00.359 17:36:38 -- nvmf/common.sh@116 -- # sync 00:26:00.359 17:36:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:00.359 17:36:38 -- nvmf/common.sh@119 -- # set +e 00:26:00.360 17:36:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:00.360 17:36:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:00.360 rmmod nvme_tcp 00:26:00.360 rmmod nvme_fabrics 00:26:00.360 rmmod nvme_keyring 00:26:00.360 17:36:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:00.360 17:36:38 -- nvmf/common.sh@123 -- # set -e 00:26:00.360 17:36:38 -- nvmf/common.sh@124 -- # return 0 00:26:00.360 17:36:38 -- nvmf/common.sh@477 -- # '[' -n 27393 ']' 00:26:00.360 17:36:38 -- nvmf/common.sh@478 -- # killprocess 27393 00:26:00.360 
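As a sanity check on the summary table above: the Total row is the plain sum of the per-core IOPS, and the total average latency is the IOPS-weighted mean of the per-core averages. Both reproduce from the printed rows with awk:

```shell
#!/bin/sh
# Recompute the Total row of the spdk_nvme_perf summary from the per-core rows.
total_iops=$(awk 'BEGIN { printf "%.2f", 5554.20 + 6367.00 + 5614.20 + 10263.69 }')
avg_lat=$(awk 'BEGIN {
    w = 5554.20*11557.87 + 6367.00*10053.81 + 5614.20*11402.77 + 10263.69*6254.41
    printf "%.2f", w / (5554.20 + 6367.00 + 5614.20 + 10263.69)
}')
echo "total IOPS: $total_iops, weighted avg latency: $avg_lat us"
```

The MiB/s column follows the same way: with the 4096-byte I/O size used here (`-o 4096`), MiB/s is just IOPS/256, e.g. 5554.20/256 ≈ 21.70 for core 4.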
17:36:38 -- common/autotest_common.sh@926 -- # '[' -z 27393 ']' 00:26:00.360 17:36:38 -- common/autotest_common.sh@930 -- # kill -0 27393 00:26:00.360 17:36:38 -- common/autotest_common.sh@931 -- # uname 00:26:00.360 17:36:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:00.360 17:36:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 27393 00:26:00.360 17:36:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:00.360 17:36:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:00.360 17:36:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 27393' 00:26:00.360 killing process with pid 27393 00:26:00.360 17:36:38 -- common/autotest_common.sh@945 -- # kill 27393 00:26:00.360 17:36:38 -- common/autotest_common.sh@950 -- # wait 27393 00:26:00.360 17:36:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:00.360 17:36:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:00.360 17:36:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:00.360 17:36:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.360 17:36:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:00.360 17:36:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.360 17:36:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.360 17:36:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.648 17:36:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:03.648 17:36:42 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:26:03.648 00:26:03.648 real 0m49.816s 00:26:03.648 user 2m43.840s 00:26:03.648 sys 0m9.629s 00:26:03.648 17:36:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.648 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.648 ************************************ 00:26:03.648 END TEST nvmf_perf_adq 00:26:03.648 ************************************ 00:26:03.648 17:36:42 -- 
nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:03.648 17:36:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:03.648 17:36:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:03.648 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.648 ************************************ 00:26:03.648 START TEST nvmf_shutdown 00:26:03.648 ************************************ 00:26:03.648 17:36:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:03.648 * Looking for test storage... 00:26:03.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:03.648 17:36:42 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.648 17:36:42 -- nvmf/common.sh@7 -- # uname -s 00:26:03.648 17:36:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.648 17:36:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.648 17:36:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.648 17:36:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.648 17:36:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.648 17:36:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.648 17:36:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.648 17:36:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.648 17:36:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.648 17:36:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.648 17:36:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:03.648 17:36:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:03.648 17:36:42 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.648 17:36:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.648 17:36:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.648 17:36:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.648 17:36:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.648 17:36:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.648 17:36:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.648 17:36:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.648 17:36:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.648 17:36:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.648 17:36:42 -- paths/export.sh@5 -- # export PATH 00:26:03.648 17:36:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.648 17:36:42 -- nvmf/common.sh@46 -- # : 0 00:26:03.648 17:36:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:03.648 17:36:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:03.648 17:36:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:03.648 17:36:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.648 17:36:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.648 17:36:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:03.648 17:36:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:03.648 17:36:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:03.648 17:36:42 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:03.648 17:36:42 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:03.648 17:36:42 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:03.648 17:36:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:03.648 17:36:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:03.648 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.648 ************************************ 00:26:03.648 START TEST nvmf_shutdown_tc1 00:26:03.648 ************************************ 00:26:03.648 17:36:42 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:26:03.648 17:36:42 -- target/shutdown.sh@74 -- # starttarget 00:26:03.648 17:36:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:03.648 17:36:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:03.648 17:36:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.648 17:36:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:03.648 17:36:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:03.648 17:36:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:03.648 17:36:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.648 17:36:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.648 17:36:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.648 17:36:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:03.648 17:36:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:03.648 17:36:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:03.648 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:08.913 17:36:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:08.913 17:36:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:08.913 17:36:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:08.913 17:36:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:08.913 17:36:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:08.913 17:36:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:08.913 17:36:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:26:08.913 17:36:47 -- nvmf/common.sh@294 -- # net_devs=() 00:26:08.913 17:36:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:08.913 17:36:47 -- nvmf/common.sh@295 -- # e810=() 00:26:08.913 17:36:47 -- nvmf/common.sh@295 -- # local -ga e810 00:26:08.913 17:36:47 -- nvmf/common.sh@296 -- # x722=() 00:26:08.913 17:36:47 -- nvmf/common.sh@296 -- # local -ga x722 00:26:08.913 17:36:47 -- nvmf/common.sh@297 -- # mlx=() 00:26:08.913 17:36:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:08.913 17:36:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.913 17:36:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:08.913 17:36:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:08.913 17:36:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:08.913 17:36:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
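The discovery loop traced above resolves each matched PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path. A self-contained sketch of that pattern against a mocked sysfs tree (the addresses and cvl_0_* names mirror this host's two E810 ports):

```shell
#!/bin/sh
# Mock the sysfs layout that gather_supported_nvmf_pci_devs walks.
root=$(mktemp -d)
mkdir -p "$root/0000:af:00.0/net/cvl_0_0" "$root/0000:af:00.1/net/cvl_0_1"

net_devs=""
for pci in "$root"/*; do
    for dev in "$pci"/net/*; do            # one net device per port here
        net_devs="$net_devs ${dev##*/}"    # drop the sysfs path, keep the name
    done
done
echo "Found net devices:$net_devs"
rm -r "$root"
```

This is why the log prints "Found net devices under 0000:af:00.0: cvl_0_0" and the sibling line for the second port before concluding `is_hw=yes`.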
00:26:08.913 17:36:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:08.913 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:08.913 17:36:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:08.913 17:36:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:08.913 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:08.913 17:36:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:08.913 17:36:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:08.913 17:36:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:08.913 17:36:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.913 17:36:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:08.913 17:36:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.913 17:36:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:08.913 Found net devices under 0000:af:00.0: cvl_0_0 00:26:08.913 17:36:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.913 17:36:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:08.913 17:36:47 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.914 17:36:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:08.914 17:36:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.914 17:36:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:08.914 Found net devices under 0000:af:00.1: cvl_0_1 00:26:08.914 17:36:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.914 17:36:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:08.914 17:36:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:08.914 17:36:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:08.914 17:36:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:08.914 17:36:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:08.914 17:36:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.914 17:36:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.914 17:36:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.914 17:36:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:08.914 17:36:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.914 17:36:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.914 17:36:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:08.914 17:36:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.914 17:36:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.914 17:36:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:08.914 17:36:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:08.914 17:36:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.914 17:36:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.914 17:36:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.914 17:36:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:08.914 17:36:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:08.914 17:36:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.914 17:36:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.914 17:36:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.914 17:36:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:08.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:26:08.914 00:26:08.914 --- 10.0.0.2 ping statistics --- 00:26:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.914 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:26:08.914 17:36:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:08.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:26:08.914 00:26:08.914 --- 10.0.0.1 ping statistics --- 00:26:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.914 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:26:08.914 17:36:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.914 17:36:47 -- nvmf/common.sh@410 -- # return 0 00:26:08.914 17:36:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:08.914 17:36:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.914 17:36:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:08.914 17:36:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:08.914 17:36:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.914 17:36:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:08.914 17:36:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:08.914 17:36:47 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:08.914 17:36:47 -- nvmf/common.sh@467 -- # 
timing_enter start_nvmf_tgt 00:26:08.914 17:36:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:08.914 17:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:08.914 17:36:47 -- nvmf/common.sh@469 -- # nvmfpid=33037 00:26:08.914 17:36:47 -- nvmf/common.sh@470 -- # waitforlisten 33037 00:26:08.914 17:36:47 -- common/autotest_common.sh@819 -- # '[' -z 33037 ']' 00:26:08.914 17:36:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.914 17:36:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:08.914 17:36:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.914 17:36:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:08.914 17:36:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:08.914 17:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:08.914 [2024-07-12 17:36:47.437413] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:08.914 [2024-07-12 17:36:47.437467] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.914 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.914 [2024-07-12 17:36:47.514224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.914 [2024-07-12 17:36:47.557349] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:08.914 [2024-07-12 17:36:47.557495] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
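nvmfappstart passes `-m 0x1E` to nvmf_tgt here, which is why this instance's reactors come up on cores 1-4 (the earlier `-m 0xF` run used cores 0-3). Decoding such a core mask is plain bit arithmetic:

```shell
#!/bin/sh
# Expand an SPDK -m core mask into the list of reactor cores it selects.
mask=$(( 0x1E ))        # the mask from `nvmfappstart -m 0x1E`
cores=""
i=0
while [ "$i" -lt 32 ]; do
    if [ $(( (mask >> i) & 1 )) -eq 1 ]; then
        cores="$cores$i "
    fi
    i=$(( i + 1 ))
done
cores=${cores% }        # trim the trailing space
echo "reactor cores: $cores"
```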
00:26:08.914 [2024-07-12 17:36:47.557507] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.914 [2024-07-12 17:36:47.557516] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.914 [2024-07-12 17:36:47.557619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.914 [2024-07-12 17:36:47.557709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.914 [2024-07-12 17:36:47.557824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:08.914 [2024-07-12 17:36:47.557824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.482 17:36:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:09.482 17:36:48 -- common/autotest_common.sh@852 -- # return 0 00:26:09.482 17:36:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:09.482 17:36:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:09.482 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.482 17:36:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.482 17:36:48 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.482 17:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.482 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.482 [2024-07-12 17:36:48.425394] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.482 17:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.482 17:36:48 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:09.482 17:36:48 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:09.482 17:36:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:09.482 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.482 17:36:48 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:09.482 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.482 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.482 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.482 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:09.742 17:36:48 -- target/shutdown.sh@28 -- # cat 00:26:09.742 17:36:48 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:09.742 17:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.742 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.742 Malloc1 00:26:09.742 [2024-07-12 17:36:48.525591] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.742 Malloc2 00:26:09.742 Malloc3 00:26:09.742 Malloc4 00:26:09.742 Malloc5 00:26:10.002 Malloc6 00:26:10.002 Malloc7 00:26:10.002 Malloc8 00:26:10.002 
Malloc9 00:26:10.002 Malloc10 00:26:10.002 17:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.002 17:36:48 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:10.002 17:36:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:10.002 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:10.002 17:36:48 -- target/shutdown.sh@78 -- # perfpid=33390 00:26:10.002 17:36:48 -- target/shutdown.sh@79 -- # waitforlisten 33390 /var/tmp/bdevperf.sock 00:26:10.002 17:36:48 -- common/autotest_common.sh@819 -- # '[' -z 33390 ']' 00:26:10.002 17:36:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:10.002 17:36:48 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:10.002 17:36:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:10.002 17:36:48 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:10.002 17:36:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:10.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:10.002 17:36:48 -- nvmf/common.sh@520 -- # config=() 00:26:10.002 17:36:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:10.002 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:10.002 17:36:48 -- nvmf/common.sh@520 -- # local subsystem config 00:26:10.002 17:36:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.002 17:36:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.002 { 00:26:10.002 "params": { 00:26:10.002 "name": "Nvme$subsystem", 00:26:10.002 "trtype": "$TEST_TRANSPORT", 00:26:10.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.002 "adrfam": "ipv4", 00:26:10.002 "trsvcid": "$NVMF_PORT", 00:26:10.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.002 "hdgst": ${hdgst:-false}, 00:26:10.002 "ddgst": ${ddgst:-false} 00:26:10.002 }, 00:26:10.002 "method": "bdev_nvme_attach_controller" 00:26:10.002 } 00:26:10.002 EOF 00:26:10.002 )") 00:26:10.002 17:36:48 -- nvmf/common.sh@542 -- # cat 00:26:10.261 17:36:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.261 { 00:26:10.261 "params": { 00:26:10.261 "name": "Nvme$subsystem", 00:26:10.261 "trtype": "$TEST_TRANSPORT", 00:26:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.261 "adrfam": "ipv4", 00:26:10.261 "trsvcid": "$NVMF_PORT", 00:26:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.261 "hdgst": ${hdgst:-false}, 00:26:10.261 "ddgst": ${ddgst:-false} 00:26:10.261 }, 00:26:10.261 "method": "bdev_nvme_attach_controller" 00:26:10.261 } 00:26:10.261 EOF 00:26:10.261 )") 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # cat 00:26:10.261 17:36:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.261 { 00:26:10.261 "params": { 00:26:10.261 "name": 
"Nvme$subsystem", 00:26:10.261 "trtype": "$TEST_TRANSPORT", 00:26:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.261 "adrfam": "ipv4", 00:26:10.261 "trsvcid": "$NVMF_PORT", 00:26:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.261 "hdgst": ${hdgst:-false}, 00:26:10.261 "ddgst": ${ddgst:-false} 00:26:10.261 }, 00:26:10.261 "method": "bdev_nvme_attach_controller" 00:26:10.261 } 00:26:10.261 EOF 00:26:10.261 )") 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # cat 00:26:10.261 17:36:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.261 { 00:26:10.261 "params": { 00:26:10.261 "name": "Nvme$subsystem", 00:26:10.261 "trtype": "$TEST_TRANSPORT", 00:26:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.261 "adrfam": "ipv4", 00:26:10.261 "trsvcid": "$NVMF_PORT", 00:26:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.261 "hdgst": ${hdgst:-false}, 00:26:10.261 "ddgst": ${ddgst:-false} 00:26:10.261 }, 00:26:10.261 "method": "bdev_nvme_attach_controller" 00:26:10.261 } 00:26:10.261 EOF 00:26:10.261 )") 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # cat 00:26:10.261 17:36:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.261 { 00:26:10.261 "params": { 00:26:10.261 "name": "Nvme$subsystem", 00:26:10.261 "trtype": "$TEST_TRANSPORT", 00:26:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.261 "adrfam": "ipv4", 00:26:10.261 "trsvcid": "$NVMF_PORT", 00:26:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.261 "hdgst": ${hdgst:-false}, 00:26:10.261 "ddgst": ${ddgst:-false} 00:26:10.261 }, 00:26:10.261 "method": "bdev_nvme_attach_controller" 00:26:10.261 } 00:26:10.261 EOF 
00:26:10.261 )") 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # cat 00:26:10.261 17:36:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.261 17:36:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.261 { 00:26:10.261 "params": { 00:26:10.261 "name": "Nvme$subsystem", 00:26:10.261 "trtype": "$TEST_TRANSPORT", 00:26:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.261 "adrfam": "ipv4", 00:26:10.261 "trsvcid": "$NVMF_PORT", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.262 "hdgst": ${hdgst:-false}, 00:26:10.262 "ddgst": ${ddgst:-false} 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 } 00:26:10.262 EOF 00:26:10.262 )") 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # cat 00:26:10.262 17:36:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.262 { 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme$subsystem", 00:26:10.262 "trtype": "$TEST_TRANSPORT", 00:26:10.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "$NVMF_PORT", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.262 "hdgst": ${hdgst:-false}, 00:26:10.262 "ddgst": ${ddgst:-false} 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 } 00:26:10.262 EOF 00:26:10.262 )") 00:26:10.262 [2024-07-12 17:36:49.008223] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:10.262 [2024-07-12 17:36:49.008289] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # cat 00:26:10.262 17:36:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.262 { 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme$subsystem", 00:26:10.262 "trtype": "$TEST_TRANSPORT", 00:26:10.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "$NVMF_PORT", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.262 "hdgst": ${hdgst:-false}, 00:26:10.262 "ddgst": ${ddgst:-false} 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 } 00:26:10.262 EOF 00:26:10.262 )") 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # cat 00:26:10.262 17:36:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.262 { 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme$subsystem", 00:26:10.262 "trtype": "$TEST_TRANSPORT", 00:26:10.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "$NVMF_PORT", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.262 "hdgst": ${hdgst:-false}, 00:26:10.262 "ddgst": ${ddgst:-false} 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 } 00:26:10.262 EOF 00:26:10.262 )") 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # cat 00:26:10.262 17:36:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.262 { 
00:26:10.262 "params": { 00:26:10.262 "name": "Nvme$subsystem", 00:26:10.262 "trtype": "$TEST_TRANSPORT", 00:26:10.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "$NVMF_PORT", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.262 "hdgst": ${hdgst:-false}, 00:26:10.262 "ddgst": ${ddgst:-false} 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 } 00:26:10.262 EOF 00:26:10.262 )") 00:26:10.262 17:36:49 -- nvmf/common.sh@542 -- # cat 00:26:10.262 17:36:49 -- nvmf/common.sh@544 -- # jq . 00:26:10.262 17:36:49 -- nvmf/common.sh@545 -- # IFS=, 00:26:10.262 17:36:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme1", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme2", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme3", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 
00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme4", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme5", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme6", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme7", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme8", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 
"subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme9", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 },{ 00:26:10.262 "params": { 00:26:10.262 "name": "Nvme10", 00:26:10.262 "trtype": "tcp", 00:26:10.262 "traddr": "10.0.0.2", 00:26:10.262 "adrfam": "ipv4", 00:26:10.262 "trsvcid": "4420", 00:26:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:10.262 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:10.262 "hdgst": false, 00:26:10.262 "ddgst": false 00:26:10.262 }, 00:26:10.262 "method": "bdev_nvme_attach_controller" 00:26:10.262 }' 00:26:10.262 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.262 [2024-07-12 17:36:49.093306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.262 [2024-07-12 17:36:49.133838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.166 17:36:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:12.166 17:36:50 -- common/autotest_common.sh@852 -- # return 0 00:26:12.166 17:36:50 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:12.167 17:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.167 17:36:50 -- common/autotest_common.sh@10 -- # set +x 00:26:12.167 17:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.167 17:36:50 -- target/shutdown.sh@83 -- # kill -9 33390 00:26:12.167 17:36:50 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:12.167 
17:36:50 -- target/shutdown.sh@87 -- # sleep 1 00:26:12.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 33390 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:12.733 17:36:51 -- target/shutdown.sh@88 -- # kill -0 33037 00:26:12.993 17:36:51 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:12.993 17:36:51 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:12.993 17:36:51 -- nvmf/common.sh@520 -- # config=() 00:26:12.993 17:36:51 -- nvmf/common.sh@520 -- # local subsystem config 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 } 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 } 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 } 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 } 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": 
{ 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 } 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 } 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 
} 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 [2024-07-12 17:36:51.747451] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:12.993 [2024-07-12 17:36:51.747510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid33820 ] 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.993 "name": "Nvme$subsystem", 00:26:12.993 "trtype": "$TEST_TRANSPORT", 00:26:12.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.993 "adrfam": "ipv4", 00:26:12.993 "trsvcid": "$NVMF_PORT", 00:26:12.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.993 "hdgst": ${hdgst:-false}, 00:26:12.993 "ddgst": ${ddgst:-false} 00:26:12.993 }, 00:26:12.993 "method": "bdev_nvme_attach_controller" 00:26:12.993 } 00:26:12.993 EOF 00:26:12.993 )") 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.993 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.993 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.993 { 00:26:12.993 "params": { 00:26:12.994 "name": "Nvme$subsystem", 00:26:12.994 "trtype": "$TEST_TRANSPORT", 00:26:12.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "$NVMF_PORT", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.994 "hdgst": ${hdgst:-false}, 00:26:12.994 "ddgst": ${ddgst:-false} 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 } 00:26:12.994 EOF 00:26:12.994 )") 00:26:12.994 17:36:51 -- nvmf/common.sh@542 -- # cat 
00:26:12.994 17:36:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:12.994 17:36:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:12.994 { 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme$subsystem", 00:26:12.994 "trtype": "$TEST_TRANSPORT", 00:26:12.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "$NVMF_PORT", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.994 "hdgst": ${hdgst:-false}, 00:26:12.994 "ddgst": ${ddgst:-false} 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 } 00:26:12.994 EOF 00:26:12.994 )") 00:26:12.994 17:36:51 -- nvmf/common.sh@542 -- # cat 00:26:12.994 17:36:51 -- nvmf/common.sh@544 -- # jq . 00:26:12.994 17:36:51 -- nvmf/common.sh@545 -- # IFS=, 00:26:12.994 17:36:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme1", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme2", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme3", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 
"subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme4", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme5", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme6", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme7", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 
00:26:12.994 "name": "Nvme8", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme9", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 },{ 00:26:12.994 "params": { 00:26:12.994 "name": "Nvme10", 00:26:12.994 "trtype": "tcp", 00:26:12.994 "traddr": "10.0.0.2", 00:26:12.994 "adrfam": "ipv4", 00:26:12.994 "trsvcid": "4420", 00:26:12.994 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:12.994 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:12.994 "hdgst": false, 00:26:12.994 "ddgst": false 00:26:12.994 }, 00:26:12.994 "method": "bdev_nvme_attach_controller" 00:26:12.994 }' 00:26:12.994 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.994 [2024-07-12 17:36:51.829430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.994 [2024-07-12 17:36:51.870578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.372 Running I/O for 1 seconds... 
00:26:15.750 00:26:15.750 Latency(us) 00:26:15.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.750 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.750 Verification LBA range: start 0x0 length 0x400 00:26:15.750 Nvme1n1 : 1.11 357.80 22.36 0.00 0.00 175624.50 23831.27 137268.13 00:26:15.750 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.750 Verification LBA range: start 0x0 length 0x400 00:26:15.750 Nvme2n1 : 1.08 329.15 20.57 0.00 0.00 189202.95 20018.27 148707.14 00:26:15.750 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.750 Verification LBA range: start 0x0 length 0x400 00:26:15.750 Nvme3n1 : 1.11 357.18 22.32 0.00 0.00 172811.46 23235.49 137268.13 00:26:15.750 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.750 Verification LBA range: start 0x0 length 0x400 00:26:15.750 Nvme4n1 : 1.12 355.42 22.21 0.00 0.00 171945.03 25976.09 136314.88 00:26:15.750 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.751 Verification LBA range: start 0x0 length 0x400 00:26:15.751 Nvme5n1 : 1.12 354.91 22.18 0.00 0.00 170581.32 26095.24 138221.38 00:26:15.751 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.751 Verification LBA range: start 0x0 length 0x400 00:26:15.751 Nvme6n1 : 1.10 325.18 20.32 0.00 0.00 183806.99 17039.36 163005.91 00:26:15.751 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.751 Verification LBA range: start 0x0 length 0x400 00:26:15.751 Nvme7n1 : 1.10 337.50 21.09 0.00 0.00 174423.72 14358.34 137268.13 00:26:15.751 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.751 Verification LBA range: start 0x0 length 0x400 00:26:15.751 Nvme8n1 : 1.11 320.59 20.04 0.00 0.00 183058.58 25022.84 143940.89 00:26:15.751 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:26:15.751 Verification LBA range: start 0x0 length 0x400 00:26:15.751 Nvme9n1 : 1.12 353.90 22.12 0.00 0.00 164780.08 27644.28 136314.88 00:26:15.751 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:15.751 Verification LBA range: start 0x0 length 0x400 00:26:15.751 Nvme10n1 : 1.12 324.85 20.30 0.00 0.00 177864.96 5719.51 142987.64 00:26:15.751 =================================================================================================================== 00:26:15.751 Total : 3416.48 213.53 0.00 0.00 176121.11 5719.51 163005.91 00:26:15.751 17:36:54 -- target/shutdown.sh@93 -- # stoptarget 00:26:15.751 17:36:54 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:15.751 17:36:54 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:15.751 17:36:54 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:15.751 17:36:54 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:15.751 17:36:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:15.751 17:36:54 -- nvmf/common.sh@116 -- # sync 00:26:15.751 17:36:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:15.751 17:36:54 -- nvmf/common.sh@119 -- # set +e 00:26:15.751 17:36:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:15.751 17:36:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:15.751 rmmod nvme_tcp 00:26:15.751 rmmod nvme_fabrics 00:26:15.751 rmmod nvme_keyring 00:26:15.751 17:36:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:15.751 17:36:54 -- nvmf/common.sh@123 -- # set -e 00:26:15.751 17:36:54 -- nvmf/common.sh@124 -- # return 0 00:26:15.751 17:36:54 -- nvmf/common.sh@477 -- # '[' -n 33037 ']' 00:26:15.751 17:36:54 -- nvmf/common.sh@478 -- # killprocess 33037 00:26:15.751 17:36:54 -- common/autotest_common.sh@926 -- # '[' -z 33037 ']' 00:26:15.751 17:36:54 -- 
common/autotest_common.sh@930 -- # kill -0 33037 00:26:15.751 17:36:54 -- common/autotest_common.sh@931 -- # uname 00:26:15.751 17:36:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:15.751 17:36:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 33037 00:26:15.751 17:36:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:15.751 17:36:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:15.751 17:36:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 33037' 00:26:15.751 killing process with pid 33037 00:26:15.751 17:36:54 -- common/autotest_common.sh@945 -- # kill 33037 00:26:15.751 17:36:54 -- common/autotest_common.sh@950 -- # wait 33037 00:26:16.318 17:36:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:16.318 17:36:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:16.318 17:36:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:16.318 17:36:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.318 17:36:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:16.318 17:36:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.318 17:36:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.318 17:36:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.223 17:36:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:18.223 00:26:18.223 real 0m14.767s 00:26:18.223 user 0m35.374s 00:26:18.223 sys 0m5.127s 00:26:18.223 17:36:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:18.223 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.223 ************************************ 00:26:18.223 END TEST nvmf_shutdown_tc1 00:26:18.223 ************************************ 00:26:18.223 17:36:57 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:18.223 17:36:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:18.223 
17:36:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:18.223 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.223 ************************************ 00:26:18.223 START TEST nvmf_shutdown_tc2 00:26:18.223 ************************************ 00:26:18.223 17:36:57 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:26:18.223 17:36:57 -- target/shutdown.sh@98 -- # starttarget 00:26:18.223 17:36:57 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:18.223 17:36:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:18.223 17:36:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.223 17:36:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:18.223 17:36:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:18.223 17:36:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:18.223 17:36:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.223 17:36:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.223 17:36:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.223 17:36:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:18.223 17:36:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:18.223 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.223 17:36:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:18.223 17:36:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:18.223 17:36:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:18.223 17:36:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:18.223 17:36:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:18.223 17:36:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:18.223 17:36:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:18.223 17:36:57 -- nvmf/common.sh@294 -- # net_devs=() 00:26:18.223 17:36:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:18.223 17:36:57 
-- nvmf/common.sh@295 -- # e810=() 00:26:18.223 17:36:57 -- nvmf/common.sh@295 -- # local -ga e810 00:26:18.223 17:36:57 -- nvmf/common.sh@296 -- # x722=() 00:26:18.223 17:36:57 -- nvmf/common.sh@296 -- # local -ga x722 00:26:18.223 17:36:57 -- nvmf/common.sh@297 -- # mlx=() 00:26:18.223 17:36:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:18.223 17:36:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.223 17:36:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:18.223 17:36:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:18.223 17:36:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:18.223 17:36:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:18.223 17:36:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:18.223 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:26:18.223 17:36:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:18.223 17:36:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:18.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:18.223 17:36:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:18.223 17:36:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:18.223 17:36:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.223 17:36:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:18.223 17:36:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.223 17:36:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:18.223 Found net devices under 0000:af:00.0: cvl_0_0 00:26:18.223 17:36:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.223 17:36:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:18.223 17:36:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.223 17:36:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:18.223 17:36:57 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.223 17:36:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:18.223 Found net devices under 0000:af:00.1: cvl_0_1 00:26:18.223 17:36:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.223 17:36:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:18.223 17:36:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:18.223 17:36:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:18.223 17:36:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:18.223 17:36:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.223 17:36:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.223 17:36:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.223 17:36:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:18.223 17:36:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.223 17:36:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.223 17:36:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:18.223 17:36:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.223 17:36:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.223 17:36:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:18.223 17:36:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:18.223 17:36:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.223 17:36:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.483 17:36:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.483 17:36:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.483 17:36:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:18.483 17:36:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up
00:26:18.483 17:36:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:18.483 17:36:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:18.483 17:36:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:26:18.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:18.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:26:18.483 
00:26:18.483 --- 10.0.0.2 ping statistics ---
00:26:18.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:18.483 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:26:18.483 17:36:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:18.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:18.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms
00:26:18.483 
00:26:18.483 --- 10.0.0.1 ping statistics ---
00:26:18.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:18.483 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms
00:26:18.483 17:36:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:18.483 17:36:57 -- nvmf/common.sh@410 -- # return 0
00:26:18.483 17:36:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:26:18.483 17:36:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:18.483 17:36:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:26:18.483 17:36:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:26:18.483 17:36:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:18.483 17:36:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:26:18.483 17:36:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:26:18.742 17:36:57 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:26:18.742 17:36:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:18.742 17:36:57 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:18.742 17:36:57 -- common/autotest_common.sh@10 -- # set +x
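The trace above shows the harness wiring up the test network: one port of the NIC (cvl_0_0) is moved into a dedicated namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 (NVMe/TCP) is opened, and a ping in each direction confirms connectivity. A minimal sketch of that sequence, with the function name invented for illustration; it only echoes the commands, since the real steps in nvmf/common.sh need root and physical ports:

```shell
# Dry-run sketch of the namespace setup seen in the trace above.
# Namespace and interface names are taken from the log; the function
# itself is illustrative and prints commands instead of executing them.
nvmf_netns_setup() {
  local ns=$1 tgt_if=$2 ini_if=$3
  echo ip netns add "$ns"                                            # target-side namespace
  echo ip link set "$tgt_if" netns "$ns"                             # move the target port into it
  echo ip addr add 10.0.0.1/24 dev "$ini_if"                         # initiator address (host side)
  echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"     # target address (namespace side)
  echo ip link set "$ini_if" up
  echo ip netns exec "$ns" ip link set "$tgt_if" up
  echo ip netns exec "$ns" ip link set lo up
  echo iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  echo ping -c 1 10.0.0.2                                            # host -> target check
  echo ip netns exec "$ns" ping -c 1 10.0.0.1                        # target -> host check
}
nvmf_netns_setup cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Because the target lives in its own namespace, the kernel routes host-to-target traffic over the physical link rather than short-circuiting it through loopback, which is why the log pings each direction before starting the target.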
00:26:18.742 17:36:57 -- nvmf/common.sh@469 -- # nvmfpid=34910 00:26:18.742 17:36:57 -- nvmf/common.sh@470 -- # waitforlisten 34910 00:26:18.742 17:36:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:18.742 17:36:57 -- common/autotest_common.sh@819 -- # '[' -z 34910 ']' 00:26:18.742 17:36:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.742 17:36:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:18.742 17:36:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.742 17:36:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:18.742 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.742 [2024-07-12 17:36:57.518517] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:18.742 [2024-07-12 17:36:57.518570] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.742 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.742 [2024-07-12 17:36:57.594887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.742 [2024-07-12 17:36:57.638123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:18.742 [2024-07-12 17:36:57.638264] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.742 [2024-07-12 17:36:57.638276] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
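The `waitforlisten 34910` step above blocks until the freshly launched nvmf_tgt answers on its RPC socket (/var/tmp/spdk.sock), giving up early if the process dies. A loose, illustrative sketch of that polling pattern; `rpc_ok` is a stub standing in for a real RPC probe, and the loop shape only approximates common/autotest_common.sh:

```shell
# Illustrative sketch of the waitforlisten pattern: poll until the app
# answers on its RPC socket, bailing out if the process exits first.
# `rpc_ok` is a stub for a real probe such as querying rpc_get_methods.
TRIES=0
rpc_ok() { TRIES=$((TRIES + 1)); [ "$TRIES" -ge 3 ]; }   # stub: answers on the 3rd poll
waitforlisten() {
  local pid=$1 max_retries=100
  while ((max_retries--)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died while we waited
    rpc_ok && return 0                       # socket is up and answering
    sleep 0.1
  done
  return 1                                   # gave up after max_retries polls
}
waitforlisten $$ && echo "listening"
```

Checking liveness with `kill -0` on every iteration is what turns a hung startup into a fast failure instead of a full retry timeout.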
00:26:18.742 [2024-07-12 17:36:57.638285] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.742 [2024-07-12 17:36:57.638396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.742 [2024-07-12 17:36:57.638484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.742 [2024-07-12 17:36:57.638514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.742 [2024-07-12 17:36:57.638513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:19.680 17:36:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:19.680 17:36:58 -- common/autotest_common.sh@852 -- # return 0 00:26:19.680 17:36:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:19.680 17:36:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:19.680 17:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:19.680 17:36:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.680 17:36:58 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:19.680 17:36:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.680 17:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:19.680 [2024-07-12 17:36:58.495339] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.680 17:36:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.680 17:36:58 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:19.680 17:36:58 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:19.680 17:36:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:19.680 17:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:19.680 17:36:58 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:19.680 17:36:58 -- target/shutdown.sh@28 -- # cat 00:26:19.680 17:36:58 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:19.680 17:36:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.680 17:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:19.680 Malloc1 00:26:19.680 [2024-07-12 17:36:58.595574] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.680 Malloc2 00:26:19.939 Malloc3 00:26:19.939 Malloc4 00:26:19.939 Malloc5 00:26:19.939 Malloc6 00:26:19.939 Malloc7 00:26:19.939 Malloc8 00:26:20.198 Malloc9 00:26:20.198 Malloc10 00:26:20.198 17:36:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.198 17:36:58 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:26:20.198 17:36:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:20.198 17:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:20.198 17:36:59 -- target/shutdown.sh@102 -- # perfpid=35231 00:26:20.198 17:36:59 -- target/shutdown.sh@103 -- # waitforlisten 35231 /var/tmp/bdevperf.sock 00:26:20.198 17:36:59 -- common/autotest_common.sh@819 -- # '[' -z 35231 ']' 00:26:20.198 17:36:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:20.198 17:36:59 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:20.198 17:36:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:20.198 17:36:59 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:20.198 17:36:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:20.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:20.198 17:36:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:20.198 17:36:59 -- nvmf/common.sh@520 -- # config=() 00:26:20.198 17:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.198 17:36:59 -- nvmf/common.sh@520 -- # local subsystem config 00:26:20.198 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.198 { 00:26:20.198 "params": { 00:26:20.198 "name": "Nvme$subsystem", 00:26:20.198 "trtype": "$TEST_TRANSPORT", 00:26:20.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.198 "adrfam": "ipv4", 00:26:20.198 "trsvcid": "$NVMF_PORT", 00:26:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.198 "hdgst": ${hdgst:-false}, 00:26:20.198 "ddgst": ${ddgst:-false} 00:26:20.198 }, 00:26:20.198 "method": "bdev_nvme_attach_controller" 00:26:20.198 } 00:26:20.198 EOF 00:26:20.198 )") 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.198 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.198 { 00:26:20.198 "params": { 00:26:20.198 "name": "Nvme$subsystem", 00:26:20.198 "trtype": "$TEST_TRANSPORT", 00:26:20.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.198 "adrfam": "ipv4", 00:26:20.198 "trsvcid": "$NVMF_PORT", 00:26:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.198 "hdgst": ${hdgst:-false}, 00:26:20.198 "ddgst": ${ddgst:-false} 00:26:20.198 }, 00:26:20.198 "method": "bdev_nvme_attach_controller" 00:26:20.198 } 00:26:20.198 EOF 00:26:20.198 )") 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.198 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.198 { 00:26:20.198 "params": { 00:26:20.198 "name": 
"Nvme$subsystem", 00:26:20.198 "trtype": "$TEST_TRANSPORT", 00:26:20.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.198 "adrfam": "ipv4", 00:26:20.198 "trsvcid": "$NVMF_PORT", 00:26:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.198 "hdgst": ${hdgst:-false}, 00:26:20.198 "ddgst": ${ddgst:-false} 00:26:20.198 }, 00:26:20.198 "method": "bdev_nvme_attach_controller" 00:26:20.198 } 00:26:20.198 EOF 00:26:20.198 )") 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.198 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.198 { 00:26:20.198 "params": { 00:26:20.198 "name": "Nvme$subsystem", 00:26:20.198 "trtype": "$TEST_TRANSPORT", 00:26:20.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.198 "adrfam": "ipv4", 00:26:20.198 "trsvcid": "$NVMF_PORT", 00:26:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.198 "hdgst": ${hdgst:-false}, 00:26:20.198 "ddgst": ${ddgst:-false} 00:26:20.198 }, 00:26:20.198 "method": "bdev_nvme_attach_controller" 00:26:20.198 } 00:26:20.198 EOF 00:26:20.198 )") 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.198 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.198 { 00:26:20.198 "params": { 00:26:20.198 "name": "Nvme$subsystem", 00:26:20.198 "trtype": "$TEST_TRANSPORT", 00:26:20.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.198 "adrfam": "ipv4", 00:26:20.198 "trsvcid": "$NVMF_PORT", 00:26:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.198 "hdgst": ${hdgst:-false}, 00:26:20.198 "ddgst": ${ddgst:-false} 00:26:20.198 }, 00:26:20.198 "method": "bdev_nvme_attach_controller" 00:26:20.198 } 00:26:20.198 EOF 
00:26:20.198 )") 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.198 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.198 { 00:26:20.198 "params": { 00:26:20.198 "name": "Nvme$subsystem", 00:26:20.198 "trtype": "$TEST_TRANSPORT", 00:26:20.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.198 "adrfam": "ipv4", 00:26:20.198 "trsvcid": "$NVMF_PORT", 00:26:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.198 "hdgst": ${hdgst:-false}, 00:26:20.198 "ddgst": ${ddgst:-false} 00:26:20.198 }, 00:26:20.198 "method": "bdev_nvme_attach_controller" 00:26:20.198 } 00:26:20.198 EOF 00:26:20.198 )") 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.198 [2024-07-12 17:36:59.075902] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:20.198 [2024-07-12 17:36:59.075945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35231 ] 00:26:20.198 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.198 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.198 { 00:26:20.198 "params": { 00:26:20.198 "name": "Nvme$subsystem", 00:26:20.198 "trtype": "$TEST_TRANSPORT", 00:26:20.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.198 "adrfam": "ipv4", 00:26:20.198 "trsvcid": "$NVMF_PORT", 00:26:20.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.198 "hdgst": ${hdgst:-false}, 00:26:20.198 "ddgst": ${ddgst:-false} 00:26:20.198 }, 00:26:20.198 "method": "bdev_nvme_attach_controller" 00:26:20.198 } 00:26:20.199 EOF 00:26:20.199 )") 00:26:20.199 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.199 
17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.199 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.199 { 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme$subsystem", 00:26:20.199 "trtype": "$TEST_TRANSPORT", 00:26:20.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "$NVMF_PORT", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.199 "hdgst": ${hdgst:-false}, 00:26:20.199 "ddgst": ${ddgst:-false} 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 } 00:26:20.199 EOF 00:26:20.199 )") 00:26:20.199 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.199 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.199 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.199 { 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme$subsystem", 00:26:20.199 "trtype": "$TEST_TRANSPORT", 00:26:20.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "$NVMF_PORT", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.199 "hdgst": ${hdgst:-false}, 00:26:20.199 "ddgst": ${ddgst:-false} 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 } 00:26:20.199 EOF 00:26:20.199 )") 00:26:20.199 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.199 17:36:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.199 17:36:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.199 { 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme$subsystem", 00:26:20.199 "trtype": "$TEST_TRANSPORT", 00:26:20.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "$NVMF_PORT", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.199 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:20.199 "hdgst": ${hdgst:-false}, 00:26:20.199 "ddgst": ${ddgst:-false} 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 } 00:26:20.199 EOF 00:26:20.199 )") 00:26:20.199 17:36:59 -- nvmf/common.sh@542 -- # cat 00:26:20.199 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.199 17:36:59 -- nvmf/common.sh@544 -- # jq . 00:26:20.199 17:36:59 -- nvmf/common.sh@545 -- # IFS=, 00:26:20.199 17:36:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme1", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme2", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme3", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme4", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": 
"4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme5", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme6", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme7", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme8", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 
"params": { 00:26:20.199 "name": "Nvme9", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 },{ 00:26:20.199 "params": { 00:26:20.199 "name": "Nvme10", 00:26:20.199 "trtype": "tcp", 00:26:20.199 "traddr": "10.0.0.2", 00:26:20.199 "adrfam": "ipv4", 00:26:20.199 "trsvcid": "4420", 00:26:20.199 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:20.199 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:20.199 "hdgst": false, 00:26:20.199 "ddgst": false 00:26:20.199 }, 00:26:20.199 "method": "bdev_nvme_attach_controller" 00:26:20.199 }' 00:26:20.199 [2024-07-12 17:36:59.147671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.458 [2024-07-12 17:36:59.188251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.847 Running I/O for 10 seconds... 
00:26:21.847 17:37:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:21.847 17:37:00 -- common/autotest_common.sh@852 -- # return 0 00:26:21.847 17:37:00 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:21.847 17:37:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.847 17:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.847 17:37:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.847 17:37:00 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:21.847 17:37:00 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:21.847 17:37:00 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:21.847 17:37:00 -- target/shutdown.sh@57 -- # local ret=1 00:26:21.847 17:37:00 -- target/shutdown.sh@58 -- # local i 00:26:21.847 17:37:00 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:21.847 17:37:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:22.105 17:37:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:22.105 17:37:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:22.105 17:37:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.105 17:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:22.105 17:37:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.105 17:37:00 -- target/shutdown.sh@60 -- # read_io_count=129 00:26:22.105 17:37:00 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:26:22.105 17:37:00 -- target/shutdown.sh@64 -- # ret=0 00:26:22.105 17:37:00 -- target/shutdown.sh@65 -- # break 00:26:22.105 17:37:00 -- target/shutdown.sh@69 -- # return 0 00:26:22.105 17:37:00 -- target/shutdown.sh@109 -- # killprocess 35231 00:26:22.105 17:37:00 -- common/autotest_common.sh@926 -- # '[' -z 35231 ']' 00:26:22.105 17:37:00 -- common/autotest_common.sh@930 -- # kill -0 35231 00:26:22.105 17:37:00 -- common/autotest_common.sh@931 -- # uname 00:26:22.105 
17:37:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:22.105 17:37:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 35231 00:26:22.105 17:37:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:22.105 17:37:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:22.105 17:37:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 35231' 00:26:22.105 killing process with pid 35231 00:26:22.105 17:37:00 -- common/autotest_common.sh@945 -- # kill 35231 00:26:22.105 17:37:00 -- common/autotest_common.sh@950 -- # wait 35231 00:26:22.105 Received shutdown signal, test time was about 0.553243 seconds 00:26:22.105 00:26:22.105 Latency(us) 00:26:22.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.105 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme1n1 : 0.53 359.43 22.46 0.00 0.00 170733.51 26571.87 137268.13 00:26:22.105 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme2n1 : 0.54 350.50 21.91 0.00 0.00 171861.10 27048.49 159192.90 00:26:22.105 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme3n1 : 0.53 356.43 22.28 0.00 0.00 164990.80 24188.74 133455.13 00:26:22.105 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme4n1 : 0.53 357.93 22.37 0.00 0.00 161831.90 26095.24 134408.38 00:26:22.105 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme5n1 : 0.53 355.35 22.21 0.00 0.00 159436.86 27763.43 136314.88 00:26:22.105 Job: Nvme6n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme6n1 : 0.55 346.18 21.64 0.00 0.00 160015.71 25976.09 145847.39 00:26:22.105 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme7n1 : 0.54 348.92 21.81 0.00 0.00 154654.93 34793.66 139174.63 00:26:22.105 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme8n1 : 0.54 352.36 22.02 0.00 0.00 151055.78 28359.21 127735.62 00:26:22.105 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme9n1 : 0.55 347.31 21.71 0.00 0.00 150488.22 27763.43 142987.64 00:26:22.105 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.105 Verification LBA range: start 0x0 length 0x400 00:26:22.105 Nvme10n1 : 0.55 344.06 21.50 0.00 0.00 149282.86 26333.56 143940.89 00:26:22.105 =================================================================================================================== 00:26:22.105 Total : 3518.45 219.90 0.00 0.00 159435.17 24188.74 159192.90 00:26:22.364 17:37:01 -- target/shutdown.sh@112 -- # sleep 1 00:26:23.297 17:37:02 -- target/shutdown.sh@113 -- # kill -0 34910 00:26:23.297 17:37:02 -- target/shutdown.sh@115 -- # stoptarget 00:26:23.297 17:37:02 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:23.297 17:37:02 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:23.297 17:37:02 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:23.297 17:37:02 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:23.297 17:37:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:23.297 17:37:02 -- 
nvmf/common.sh@116 -- # sync 00:26:23.297 17:37:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:23.297 17:37:02 -- nvmf/common.sh@119 -- # set +e 00:26:23.297 17:37:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:23.297 17:37:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:23.297 rmmod nvme_tcp 00:26:23.555 rmmod nvme_fabrics 00:26:23.555 rmmod nvme_keyring 00:26:23.555 17:37:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:23.555 17:37:02 -- nvmf/common.sh@123 -- # set -e 00:26:23.555 17:37:02 -- nvmf/common.sh@124 -- # return 0 00:26:23.555 17:37:02 -- nvmf/common.sh@477 -- # '[' -n 34910 ']' 00:26:23.555 17:37:02 -- nvmf/common.sh@478 -- # killprocess 34910 00:26:23.555 17:37:02 -- common/autotest_common.sh@926 -- # '[' -z 34910 ']' 00:26:23.555 17:37:02 -- common/autotest_common.sh@930 -- # kill -0 34910 00:26:23.555 17:37:02 -- common/autotest_common.sh@931 -- # uname 00:26:23.555 17:37:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.555 17:37:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 34910 00:26:23.555 17:37:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:23.555 17:37:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:23.555 17:37:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 34910' 00:26:23.555 killing process with pid 34910 00:26:23.555 17:37:02 -- common/autotest_common.sh@945 -- # kill 34910 00:26:23.555 17:37:02 -- common/autotest_common.sh@950 -- # wait 34910 00:26:23.814 17:37:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:23.814 17:37:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:23.814 17:37:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:23.814 17:37:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.814 17:37:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:23.814 17:37:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:26:23.814 17:37:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.814 17:37:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.392 17:37:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:26.392 00:26:26.392 real 0m7.681s 00:26:26.392 user 0m22.923s 00:26:26.392 sys 0m1.273s 00:26:26.392 17:37:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.392 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:26.392 ************************************ 00:26:26.392 END TEST nvmf_shutdown_tc2 00:26:26.392 ************************************ 00:26:26.392 17:37:04 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:26.392 17:37:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:26.392 17:37:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:26.392 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:26.392 ************************************ 00:26:26.392 START TEST nvmf_shutdown_tc3 00:26:26.392 ************************************ 00:26:26.392 17:37:04 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:26:26.392 17:37:04 -- target/shutdown.sh@120 -- # starttarget 00:26:26.392 17:37:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:26.392 17:37:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:26.392 17:37:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.392 17:37:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:26.392 17:37:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:26.392 17:37:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:26.392 17:37:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.392 17:37:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.392 17:37:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.392 17:37:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:26.392 17:37:04 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:26.392 17:37:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:26.392 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:26.392 17:37:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:26.392 17:37:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:26.392 17:37:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:26.392 17:37:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:26.392 17:37:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:26.392 17:37:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:26.392 17:37:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:26.392 17:37:04 -- nvmf/common.sh@294 -- # net_devs=() 00:26:26.392 17:37:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:26.392 17:37:04 -- nvmf/common.sh@295 -- # e810=() 00:26:26.392 17:37:04 -- nvmf/common.sh@295 -- # local -ga e810 00:26:26.392 17:37:04 -- nvmf/common.sh@296 -- # x722=() 00:26:26.392 17:37:04 -- nvmf/common.sh@296 -- # local -ga x722 00:26:26.392 17:37:04 -- nvmf/common.sh@297 -- # mlx=() 00:26:26.392 17:37:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:26.392 17:37:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:26:26.392 17:37:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.392 17:37:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:26.392 17:37:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:26.392 17:37:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:26.392 17:37:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:26.392 17:37:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:26.392 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:26.392 17:37:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:26.392 17:37:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:26.392 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:26.392 17:37:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:26.392 17:37:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:26.392 17:37:04 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:26.392 17:37:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.392 17:37:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:26.392 17:37:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.392 17:37:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:26.392 Found net devices under 0000:af:00.0: cvl_0_0 00:26:26.392 17:37:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.392 17:37:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:26.392 17:37:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.392 17:37:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:26.392 17:37:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.392 17:37:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:26.392 Found net devices under 0000:af:00.1: cvl_0_1 00:26:26.392 17:37:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.392 17:37:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:26.392 17:37:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:26.392 17:37:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:26.392 17:37:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:26.392 17:37:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.392 17:37:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.392 17:37:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.392 17:37:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:26.392 17:37:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.392 17:37:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.392 17:37:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:26.392 17:37:04 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.392 17:37:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.392 17:37:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:26.392 17:37:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:26.392 17:37:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.392 17:37:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.392 17:37:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.392 17:37:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.392 17:37:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:26.392 17:37:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.392 17:37:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.392 17:37:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.392 17:37:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:26.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:26:26.392 00:26:26.392 --- 10.0.0.2 ping statistics --- 00:26:26.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.392 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:26:26.392 17:37:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:26:26.392 00:26:26.392 --- 10.0.0.1 ping statistics --- 00:26:26.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.392 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:26.392 17:37:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.392 17:37:05 -- nvmf/common.sh@410 -- # return 0 00:26:26.392 17:37:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:26.392 17:37:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.392 17:37:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:26.392 17:37:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:26.392 17:37:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.392 17:37:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:26.392 17:37:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:26.392 17:37:05 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:26.392 17:37:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:26.392 17:37:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:26.392 17:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:26.392 17:37:05 -- nvmf/common.sh@469 -- # nvmfpid=36442 00:26:26.392 17:37:05 -- nvmf/common.sh@470 -- # waitforlisten 36442 00:26:26.392 17:37:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:26.392 17:37:05 -- common/autotest_common.sh@819 -- # '[' -z 36442 ']' 00:26:26.392 17:37:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.392 17:37:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:26.393 17:37:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:26.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.393 17:37:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:26.393 17:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:26.393 [2024-07-12 17:37:05.259051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:26.393 [2024-07-12 17:37:05.259106] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.393 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.393 [2024-07-12 17:37:05.336278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.650 [2024-07-12 17:37:05.378739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:26.650 [2024-07-12 17:37:05.378883] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.650 [2024-07-12 17:37:05.378894] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.650 [2024-07-12 17:37:05.378904] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:26.650 [2024-07-12 17:37:05.379014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.650 [2024-07-12 17:37:05.379122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.650 [2024-07-12 17:37:05.379238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.650 [2024-07-12 17:37:05.379238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:27.215 17:37:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.215 17:37:06 -- common/autotest_common.sh@852 -- # return 0 00:26:27.215 17:37:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:27.215 17:37:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:27.215 17:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.474 17:37:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.474 17:37:06 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.474 17:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.474 17:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.474 [2024-07-12 17:37:06.225313] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.474 17:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.474 17:37:06 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:27.474 17:37:06 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:27.474 17:37:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:27.474 17:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.474 17:37:06 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:27.474 17:37:06 -- target/shutdown.sh@28 -- # cat 00:26:27.474 17:37:06 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:27.474 17:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.474 17:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.474 Malloc1 00:26:27.474 [2024-07-12 17:37:06.325494] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.474 Malloc2 00:26:27.474 Malloc3 00:26:27.474 Malloc4 00:26:27.732 Malloc5 00:26:27.732 Malloc6 00:26:27.732 Malloc7 00:26:27.732 Malloc8 00:26:27.732 Malloc9 00:26:27.732 Malloc10 00:26:27.991 17:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.991 17:37:06 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:27.991 17:37:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:27.991 17:37:06 -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.991 17:37:06 -- target/shutdown.sh@124 -- # perfpid=36789 00:26:27.991 17:37:06 -- target/shutdown.sh@125 -- # waitforlisten 36789 /var/tmp/bdevperf.sock 00:26:27.991 17:37:06 -- common/autotest_common.sh@819 -- # '[' -z 36789 ']' 00:26:27.991 17:37:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.991 17:37:06 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:27.991 17:37:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:27.991 17:37:06 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:27.991 17:37:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:27.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:27.991 17:37:06 -- nvmf/common.sh@520 -- # config=() 00:26:27.991 17:37:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:27.991 17:37:06 -- nvmf/common.sh@520 -- # local subsystem config 00:26:27.991 17:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": 
"Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 
00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 [2024-07-12 17:37:06.805925] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:27.991 [2024-07-12 17:37:06.805983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid36789 ] 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": "Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.991 "hdgst": ${hdgst:-false}, 00:26:27.991 "ddgst": ${ddgst:-false} 00:26:27.991 }, 00:26:27.991 "method": "bdev_nvme_attach_controller" 00:26:27.991 } 00:26:27.991 EOF 00:26:27.991 )") 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.991 17:37:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.991 17:37:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.991 { 00:26:27.991 "params": { 00:26:27.991 "name": 
"Nvme$subsystem", 00:26:27.991 "trtype": "$TEST_TRANSPORT", 00:26:27.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.991 "adrfam": "ipv4", 00:26:27.991 "trsvcid": "$NVMF_PORT", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.992 "hdgst": ${hdgst:-false}, 00:26:27.992 "ddgst": ${ddgst:-false} 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 } 00:26:27.992 EOF 00:26:27.992 )") 00:26:27.992 17:37:06 -- nvmf/common.sh@542 -- # cat 00:26:27.992 17:37:06 -- nvmf/common.sh@544 -- # jq . 00:26:27.992 17:37:06 -- nvmf/common.sh@545 -- # IFS=, 00:26:27.992 17:37:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme1", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme2", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme3", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": 
"bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme4", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme5", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme6", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme7", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme8", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme9", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 },{ 00:26:27.992 "params": { 00:26:27.992 "name": "Nvme10", 00:26:27.992 "trtype": "tcp", 00:26:27.992 "traddr": "10.0.0.2", 00:26:27.992 "adrfam": "ipv4", 00:26:27.992 "trsvcid": "4420", 00:26:27.992 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:27.992 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:27.992 "hdgst": false, 00:26:27.992 "ddgst": false 00:26:27.992 }, 00:26:27.992 "method": "bdev_nvme_attach_controller" 00:26:27.992 }' 00:26:27.992 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.992 [2024-07-12 17:37:06.889666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.992 [2024-07-12 17:37:06.930279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.891 Running I/O for 10 seconds... 
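For readability, the per-subsystem config generation traced above can be sketched as a standalone bash snippet. This is a simplified rewrite of the nvmf/common.sh loop seen in the trace, not the file itself; the TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT values below are placeholders for illustration, and a plain `<<EOF` heredoc stands in for the tab-indented `<<-EOF` form used by the real script:

```shell
#!/usr/bin/env bash
# Placeholder values (assumptions for the sketch, not the real test env).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  # Each iteration captures one JSON fragment from a heredoc into the
  # config array, mirroring the repeated config+=("$(cat <<-EOF ...)")
  # blocks in the trace. ${hdgst:-false} defaults the digests to false.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas via "${config[*]}" under IFS=, and print,
# as the IFS=, / printf '%s\n' step in the trace does.
old_ifs=$IFS
IFS=,
json_config="${config[*]}"
IFS=$old_ifs
printf '%s\n' "$json_config"
```

The array-plus-IFS join is why the printed config in the trace shows the fragments separated by `},{`: each heredoc yields one complete object, and the joined string is what bdevperf receives as its attach-controller configuration.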
00:26:30.472 17:37:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:30.472 17:37:09 -- common/autotest_common.sh@852 -- # return 0 00:26:30.473 17:37:09 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:30.473 17:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.473 17:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:30.473 17:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.473 17:37:09 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:30.473 17:37:09 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:30.473 17:37:09 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:30.473 17:37:09 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:30.473 17:37:09 -- target/shutdown.sh@57 -- # local ret=1 00:26:30.473 17:37:09 -- target/shutdown.sh@58 -- # local i 00:26:30.473 17:37:09 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:30.473 17:37:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:30.473 17:37:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:30.473 17:37:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:30.473 17:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.473 17:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:30.473 17:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.473 17:37:09 -- target/shutdown.sh@60 -- # read_io_count=167 00:26:30.473 17:37:09 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:26:30.473 17:37:09 -- target/shutdown.sh@64 -- # ret=0 00:26:30.473 17:37:09 -- target/shutdown.sh@65 -- # break 00:26:30.473 17:37:09 -- target/shutdown.sh@69 -- # return 0 00:26:30.473 17:37:09 -- target/shutdown.sh@134 -- # killprocess 36442 00:26:30.473 17:37:09 -- common/autotest_common.sh@926 -- # '[' -z 
36442 ']' 00:26:30.473 17:37:09 -- common/autotest_common.sh@930 -- # kill -0 36442 00:26:30.473 17:37:09 -- common/autotest_common.sh@931 -- # uname 00:26:30.473 17:37:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:30.473 17:37:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 36442 00:26:30.473 17:37:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:30.473 17:37:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:30.473 17:37:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 36442' 00:26:30.473 killing process with pid 36442 00:26:30.473 17:37:09 -- common/autotest_common.sh@945 -- # kill 36442 00:26:30.473 17:37:09 -- common/autotest_common.sh@950 -- # wait 36442 00:26:30.473 [2024-07-12 17:37:09.337449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b4d30 is same with the state(5) to be set 
[... further identical "recv state of tqpair=0x10b4d30" errors (timestamps 17:37:09.337492 through 17:37:09.337846) elided ...]
00:26:30.473 [2024-07-12 17:37:09.343343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.473 [2024-07-12 17:37:09.343386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.473 [2024-07-12 17:37:09.343400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.473 [2024-07-12 17:37:09.343411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.473 [2024-07-12
17:37:09.343423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.473 [2024-07-12 17:37:09.343434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.474 [2024-07-12 17:37:09.343445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.474 [2024-07-12 17:37:09.343457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.474 [2024-07-12 17:37:09.343468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4190 is same with the state(5) to be set 00:26:30.474 [2024-07-12 17:37:09.349518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7460 is same with the state(5) to be set 00:26:30.474 [2024-07-12 17:37:09.349552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7460 is same with the state(5) to be set 00:26:30.474 [2024-07-12 17:37:09.349563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7460 is same with the state(5) to be set 00:26:30.474 [2024-07-12 17:37:09.349574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7460 is same with the state(5) to be set 00:26:30.474 [2024-07-12 17:37:09.349583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7460 is same with the state(5) to be set 00:26:30.474 [2024-07-12 17:37:09.349593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7460 is same with the state(5) to be set 00:26:30.474 [2024-07-12 17:37:09.349602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7460 is same with the state(5) to be set 00:26:30.474 
[... further identical "recv state of tqpair=0x10b7460" errors (timestamps 17:37:09.349611 through 17:37:09.350111) elided ...]
00:26:30.474 [2024-07-12 17:37:09.352848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 
[... further identical "recv state of tqpair=0x10b51e0" errors elided ...]
00:26:30.475 [2024-07-12 17:37:09.353236] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353349] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.353440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b51e0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.354311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4190 (9): Bad file descriptor 00:26:30.475 [2024-07-12 17:37:09.354386] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.475 [2024-07-12 17:37:09.354400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.354412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.475 [2024-07-12 17:37:09.354423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.354434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.475 [2024-07-12 17:37:09.354444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.354454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.475 [2024-07-12 17:37:09.354464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.354475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9208a0 is same with the state(5) to be set 00:26:30.475 [2024-07-12 17:37:09.356018] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:30.475 [2024-07-12 17:37:09.356076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 
17:37:09.356110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.475 [2024-07-12 17:37:09.356476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.475 [2024-07-12 17:37:09.356486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.475 [2024-07-12 17:37:09.356499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356622] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.356971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.356983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 
17:37:09.356992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33920 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.476 [2024-07-12 17:37:09.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.476 [2024-07-12 17:37:09.357425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.477 [2024-07-12 17:37:09.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.477 [2024-07-12 17:37:09.357447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.477 [2024-07-12 17:37:09.357456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.477 [2024-07-12 17:37:09.357471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.477 [2024-07-12 17:37:09.357481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.477 [2024-07-12 
17:37:09.357493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.477 [2024-07-12 17:37:09.357502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.477 [2024-07-12 17:37:09.357592] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa1dd70 was disconnected and freed. reset controller. 00:26:30.477 [2024-07-12 17:37:09.358780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359468] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359700] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359927] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.359983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.360004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.360024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5690 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.363490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.477 [2024-07-12 17:37:09.363525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363574] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363685] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363794] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363903] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.363991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364009] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5b20 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.364994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365001] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365068] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365140] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.478 [2024-07-12 17:37:09.365146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365208] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365283] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.365345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5fd0 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.366227] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6330 is same with the state(5) to be set 00:26:30.479 [2024-07-12 17:37:09.366252] 
[identical tcp.c:1574 *ERROR* for tqpair=0x10b6330 repeated at every timestamp from 17:37:09.366269 through 17:37:09.366785; repeats elided] 00:26:30.480 [2024-07-12 17:37:09.366793] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6330 is same with the state(5) to be set 00:26:30.480 [2024-07-12 17:37:09.366808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6330 is same with the state(5) to be set 00:26:30.480 [2024-07-12 17:37:09.366817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6330 is same with the state(5) to be set 00:26:30.480 [2024-07-12 17:37:09.367375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa83d30 is same with the state(5) to 
be set 00:26:30.480 [2024-07-12 17:37:09.367522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83f7d0 is same with the state(5) to be set 00:26:30.480 [2024-07-12 17:37:09.367634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367657] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f1840 is same with the state(5) to be set 00:26:30.480 [2024-07-12 17:37:09.367753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.480 [2024-07-12 17:37:09.367828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.367838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaaca0 is same with the state(5) to be set 00:26:30.480 [2024-07-12 17:37:09.367869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9208a0 (9): Bad file descriptor 00:26:30.480 [2024-07-12 17:37:09.369981] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:30.480 [2024-07-12 17:37:09.370016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:30.480 [2024-07-12 17:37:09.370039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f1840 (9): Bad file descriptor 00:26:30.480 [2024-07-12 17:37:09.370090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.480 [2024-07-12 17:37:09.370143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.480 [2024-07-12 17:37:09.370511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.480 [2024-07-12 17:37:09.370520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.480 [2024-07-12 17:37:09.370534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.370978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.370988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 
17:37:09.371032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481 [2024-07-12 17:37:09.371270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481 [2024-07-12 17:37:09.371280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481
[2024-07-12 17:37:09.371292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481
[2024-07-12 17:37:09.371301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481
[2024-07-12 17:37:09.371314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481
[2024-07-12 17:37:09.371324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481
[2024-07-12 17:37:09.371337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.481
[2024-07-12 17:37:09.371347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.481
[2024-07-12 17:37:09.371359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.371526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.371536]
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1cab0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.371985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372916]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.372971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b67e0 is same with the state(5) to be set 00:26:30.482
[2024-07-12 17:37:09.373568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.482
[2024-07-12 17:37:09.374133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.374158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.374174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.374185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.374197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.374208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.374220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.482
[2024-07-12 17:37:09.374230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.482
[2024-07-12 17:37:09.374243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374763]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.374978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.483
[2024-07-12 17:37:09.374987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.483
[2024-07-12 17:37:09.374997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.483
[2024-07-12 17:37:09.375008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484
[2024-07-12 17:37:09.375294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484
[2024-07-12 17:37:09.375297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484
[2024-07-12 17:37:09.375304]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.484 [2024-07-12 17:37:09.375376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6c70 is same with the state(5) to be set 00:26:30.484 [2024-07-12 17:37:09.375424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.484 [2024-07-12 17:37:09.375588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:30.484 [2024-07-12 17:37:09.375604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.375614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.375626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.375636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.375648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.375658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.375670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.375680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.375692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.375702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.375784] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa0ecf0 was disconnected and freed. reset controller. 
00:26:30.485 [2024-07-12 17:37:09.376212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 
17:37:09.376306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376376] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376442] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376513] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.376523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6fd0 is same with the state(5) to be set 00:26:30.485 [2024-07-12 17:37:09.378406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.378447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.378469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.378492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.378514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.378536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.378558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.485 [2024-07-12 17:37:09.378579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.485 [2024-07-12 17:37:09.378589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 
[2024-07-12 17:37:09.378655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.378988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.378998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.486 [2024-07-12 17:37:09.379033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 [2024-07-12 17:37:09.379518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.486 
[2024-07-12 17:37:09.379541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.486 [2024-07-12 17:37:09.379552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.379857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.379941] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa12d80 was disconnected and freed. reset controller. 
00:26:30.487 [2024-07-12 17:37:09.379967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:30.487 [2024-07-12 17:37:09.380018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa59f0 (9): Bad file descriptor 00:26:30.487 [2024-07-12 17:37:09.380039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f1840 (104): Connection reset by peer 00:26:30.487 [2024-07-12 17:37:09.380060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4190 (104): Connection reset by peer 00:26:30.487 [2024-07-12 17:37:09.380080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa83d30 (9): Bad file descriptor 00:26:30.487 [2024-07-12 17:37:09.380102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83f7d0 (9): Bad file descriptor 00:26:30.487 [2024-07-12 17:37:09.380124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaaca0 (9): Bad file descriptor 00:26:30.487 [2024-07-12 17:37:09.380159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 
[2024-07-12 17:37:09.380215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91fc90 is same with the state(5) to be set 00:26:30.487 [2024-07-12 17:37:09.380297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76310 is same with the state(5) to be set 00:26:30.487 [2024-07-12 17:37:09.380418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.487 [2024-07-12 17:37:09.380492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.380502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x920cf0 is same with the state(5) to be set 00:26:30.487 [2024-07-12 17:37:09.380599] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:30.487 [2024-07-12 17:37:09.380658] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: 
Unexpected PDU type 0x00 00:26:30.487 [2024-07-12 17:37:09.380713] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:30.487 [2024-07-12 17:37:09.382195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:26:30.487 [2024-07-12 17:37:09.382221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91fc90 (9): Bad file descriptor 00:26:30.487 [2024-07-12 17:37:09.382245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f1840 (9): Bad file descriptor 00:26:30.487 [2024-07-12 17:37:09.382265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4190 (9): Bad file descriptor 00:26:30.487 [2024-07-12 17:37:09.382789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.382808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.382834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.382858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:30.487 [2024-07-12 17:37:09.382881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.382904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.382926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.382948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.487 [2024-07-12 17:37:09.382970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.487 [2024-07-12 17:37:09.382982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.382992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 
17:37:09.383782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.488 [2024-07-12 17:37:09.383940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.488 [2024-07-12 17:37:09.383949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.383968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.383978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.383991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.384245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.384260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa142a0 is same with the state(5) to be set 00:26:30.489 [2024-07-12 17:37:09.386013] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:30.489 [2024-07-12 17:37:09.386333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.489 [2024-07-12 17:37:09.386487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.489 [2024-07-12 
17:37:09.386501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa59f0 with addr=10.0.0.2, port=4420 00:26:30.489 [2024-07-12 17:37:09.386512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa59f0 is same with the state(5) to be set 00:26:30.489 [2024-07-12 17:37:09.386533] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:30.489 [2024-07-12 17:37:09.386543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:30.489 [2024-07-12 17:37:09.386554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:30.489 [2024-07-12 17:37:09.386571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.489 [2024-07-12 17:37:09.386581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.489 [2024-07-12 17:37:09.386591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.489 [2024-07-12 17:37:09.386609] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.489 [2024-07-12 17:37:09.386624] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.489 [2024-07-12 17:37:09.386719] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:30.489 [2024-07-12 17:37:09.386774] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:30.489 [2024-07-12 17:37:09.387102] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.489 [2024-07-12 17:37:09.387115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.489 [2024-07-12 17:37:09.387366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.489 [2024-07-12 17:37:09.387592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.489 [2024-07-12 17:37:09.387606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91fc90 with addr=10.0.0.2, port=4420 00:26:30.489 [2024-07-12 17:37:09.387617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91fc90 is same with the state(5) to be set 00:26:30.489 [2024-07-12 17:37:09.387831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.489 [2024-07-12 17:37:09.388002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.489 [2024-07-12 17:37:09.388016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9208a0 with addr=10.0.0.2, port=4420 00:26:30.489 [2024-07-12 17:37:09.388026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9208a0 is same with the state(5) to be set 00:26:30.489 [2024-07-12 17:37:09.388040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa59f0 (9): Bad file descriptor 00:26:30.489 [2024-07-12 17:37:09.388429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91fc90 (9): Bad file descriptor 00:26:30.489 [2024-07-12 17:37:09.388452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9208a0 (9): Bad file descriptor 00:26:30.489 [2024-07-12 17:37:09.388464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:30.489 [2024-07-12 17:37:09.388474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:30.489 [2024-07-12 17:37:09.388484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:30.489 [2024-07-12 17:37:09.388547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.489 [2024-07-12 17:37:09.388558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:30.489 [2024-07-12 17:37:09.388566] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:30.489 [2024-07-12 17:37:09.388576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:30.489 [2024-07-12 17:37:09.388590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:30.489 [2024-07-12 17:37:09.388599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:30.489 [2024-07-12 17:37:09.388609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:30.489 [2024-07-12 17:37:09.388658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.489 [2024-07-12 17:37:09.388669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.489 [2024-07-12 17:37:09.390026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76310 (9): Bad file descriptor 00:26:30.489 [2024-07-12 17:37:09.390052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x920cf0 (9): Bad file descriptor 00:26:30.489 [2024-07-12 17:37:09.390170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.390185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.390201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.390212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.390225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.390235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.390248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.390265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.390278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.390289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.489 [2024-07-12 17:37:09.390301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.489 [2024-07-12 17:37:09.390312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 
17:37:09.390560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:30.490 [2024-07-12 17:37:09.390954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.390977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.390987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.490 [2024-07-12 17:37:09.391293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.490 [2024-07-12 17:37:09.391306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 
17:37:09.391474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.391662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.391673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1f250 is same with the state(5) to be set 00:26:30.491 [2024-07-12 17:37:09.393112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.491 [2024-07-12 17:37:09.393323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393449] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:30.491 [2024-07-12 17:37:09.393717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.491 [2024-07-12 17:37:09.393728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 
17:37:09.393845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.393984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.393996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394372] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394497] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.394612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.394623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa20090 is same with the 
state(5) to be set 00:26:30.492 [2024-07-12 17:37:09.396044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.396063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.396077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.396088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.396100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.492 [2024-07-12 17:37:09.396114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.492 [2024-07-12 17:37:09.396127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 
17:37:09.396183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396323] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 
[2024-07-12 17:37:09.396586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.396989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.396999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.397012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.397022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.397037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.397046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.397059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.493 [2024-07-12 17:37:09.397069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.493 [2024-07-12 17:37:09.397083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397231] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33792 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 
17:37:09.397503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.494 [2024-07-12 17:37:09.397535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.494 [2024-07-12 17:37:09.397546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa21460 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.398942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:30.494 [2024-07-12 17:37:09.398965] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:30.494 [2024-07-12 17:37:09.398980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:30.494 [2024-07-12 17:37:09.399411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.399668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.399684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaaca0 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.399696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaaca0 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.399799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.399953] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.399967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83f7d0 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.399976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83f7d0 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.400065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.400260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.400276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa83d30 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.400286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa83d30 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.401233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:30.494 [2024-07-12 17:37:09.401253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.494 [2024-07-12 17:37:09.401271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:30.494 [2024-07-12 17:37:09.401284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:30.494 [2024-07-12 17:37:09.401296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:26:30.494 [2024-07-12 17:37:09.401342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaaca0 (9): Bad file descriptor 00:26:30.494 [2024-07-12 17:37:09.401357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83f7d0 (9): Bad file 
descriptor 00:26:30.494 [2024-07-12 17:37:09.401370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa83d30 (9): Bad file descriptor 00:26:30.494 [2024-07-12 17:37:09.401718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.401973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.401990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa59f0 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.402001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa59f0 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.402240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.402493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.402508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8e4190 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.402519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4190 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.402623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.402854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.402870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f1840 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.402888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f1840 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.402982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 
17:37:09.403227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.403244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9208a0 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.403259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9208a0 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.403451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.403648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.494 [2024-07-12 17:37:09.403663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91fc90 with addr=10.0.0.2, port=4420 00:26:30.494 [2024-07-12 17:37:09.403674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91fc90 is same with the state(5) to be set 00:26:30.494 [2024-07-12 17:37:09.403685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:30.495 [2024-07-12 17:37:09.403694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:30.495 [2024-07-12 17:37:09.403705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:30.495 [2024-07-12 17:37:09.403720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:30.495 [2024-07-12 17:37:09.403730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:30.495 [2024-07-12 17:37:09.403739] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:26:30.495 [2024-07-12 17:37:09.403752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:30.495 [2024-07-12 17:37:09.403767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:30.495 [2024-07-12 17:37:09.403777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:30.495 [2024-07-12 17:37:09.403856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.403870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.403885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.403896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.403909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.403920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.403933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.403943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.403956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.403966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.403983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.403994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 
[2024-07-12 17:37:09.404099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.495 [2024-07-12 17:37:09.404504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404632] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.495 [2024-07-12 17:37:09.404756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.495 [2024-07-12 17:37:09.404766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.404980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.404990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 
[2024-07-12 17:37:09.405012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.405334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.405345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa102d0 is same with the state(5) to be set 00:26:30.496 [2024-07-12 17:37:09.406762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.496 [2024-07-12 17:37:09.406819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.406984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.406994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.407006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.407016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.407029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.407039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.407053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.407063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.407076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.407086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.407098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.407108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.407120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.496 [2024-07-12 17:37:09.407134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.496 [2024-07-12 17:37:09.407146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.497 [2024-07-12 17:37:09.407215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407343] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 
17:37:09.407725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.407986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.407996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.408009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.408019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.408030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.408041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.408053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.408062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.408074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.408084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.497 [2024-07-12 17:37:09.408097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:30.497 [2024-07-12 17:37:09.408106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.498 [2024-07-12 17:37:09.408119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.498 [2024-07-12 17:37:09.408129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.498 [2024-07-12 17:37:09.408144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.498 [2024-07-12 17:37:09.408154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.498 [2024-07-12 17:37:09.408167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.498 [2024-07-12 17:37:09.408177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.498 [2024-07-12 17:37:09.408189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.498 [2024-07-12 17:37:09.408199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.498 [2024-07-12 17:37:09.408211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.498 [2024-07-12 17:37:09.408221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.498 [2024-07-12 
17:37:09.408233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa118b0 is same with the state(5) to be set 
00:26:30.498 [2024-07-12 17:37:09.409932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.498 [2024-07-12 17:37:09.409949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.498 [2024-07-12 17:37:09.409958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.498 [2024-07-12 17:37:09.409972] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 
00:26:30.758 task offset: 29184 on job bdev=Nvme2n1 fails 
00:26:30.758 
00:26:30.758 Latency(us) 
00:26:30.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:30.758 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme1n1 ended in about 0.72 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme1n1 : 0.72 289.06 18.07 88.94 0.00 167806.33 92465.34 154426.65 
00:26:30.758 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme2n1 ended in about 0.72 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme2n1 : 0.72 290.52 18.16 89.39 0.00 164704.76 25737.77 167772.16 
00:26:30.758 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme3n1 ended in about 0.74 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme3n1 : 0.74 281.21 17.58 86.53 0.00 168020.99 99138.09 137268.13 
00:26:30.758 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme4n1 ended in about 0.74 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme4n1 : 0.74 280.10 17.51 86.19 0.00 166425.52 85315.96 134408.38 
00:26:30.758 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme5n1 ended in about 0.75 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme5n1 : 0.75 279.01 17.44 85.85 0.00 164822.16 89605.59 135361.63 
00:26:30.758 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme6n1 ended in about 0.72 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme6n1 : 0.72 286.91 17.93 88.28 0.00 157710.02 22043.93 143940.89 
00:26:30.758 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme7n1 ended in about 0.75 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme7n1 : 0.75 276.12 17.26 84.96 0.00 162124.50 74830.20 137268.13 
00:26:30.758 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme8n1 ended in about 0.76 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme8n1 : 0.76 275.07 17.19 84.64 0.00 160465.07 79596.45 136314.88 
00:26:30.758 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme9n1 ended in about 0.73 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme9n1 : 0.73 285.39 17.84 87.81 0.00 151755.38 14000.87 142034.39 
00:26:30.758 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:30.758 Job: Nvme10n1 ended in about 0.73 seconds with error 
00:26:30.758 Verification LBA range: start 0x0 length 0x400 
00:26:30.758 Nvme10n1 : 0.73 223.96 14.00 87.40 0.00 179343.13 13702.98 144894.14 
00:26:30.758 =================================================================================================================== 
00:26:30.758 Total : 2767.36 172.96 869.98 0.00 164070.73 13702.98 167772.16 
00:26:30.758 [2024-07-12 17:37:09.443355] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:30.758 [2024-07-12 17:37:09.443397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:30.758 [2024-07-12 17:37:09.443454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa59f0 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.443472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4190 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.443486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f1840 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.443506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9208a0 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.443519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91fc90 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.443990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.444199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.444217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa76310 with addr=10.0.0.2, port=4420 00:26:30.758 [2024-07-12 17:37:09.444230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76310 is same with the state(5) to be set 00:26:30.758 [2024-07-12 17:37:09.444426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.444730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.444747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x920cf0 with addr=10.0.0.2, 
port=4420 00:26:30.758 [2024-07-12 17:37:09.444758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x920cf0 is same with the state(5) to be set 00:26:30.758 [2024-07-12 17:37:09.444770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.444779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.444791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:30.758 [2024-07-12 17:37:09.444806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.444815] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.444826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.758 [2024-07-12 17:37:09.444840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.444849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.444859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:30.758 [2024-07-12 17:37:09.444872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.444881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.444890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:26:30.758 [2024-07-12 17:37:09.444903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.444913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.444922] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:30.758 [2024-07-12 17:37:09.444946] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.758 [2024-07-12 17:37:09.444961] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.758 [2024-07-12 17:37:09.444974] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.758 [2024-07-12 17:37:09.444989] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.758 [2024-07-12 17:37:09.445002] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.758 [2024-07-12 17:37:09.445033] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.758 [2024-07-12 17:37:09.445048] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.758 [2024-07-12 17:37:09.445062] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:30.758 [2024-07-12 17:37:09.445714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:30.758 [2024-07-12 17:37:09.445733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:30.758 [2024-07-12 17:37:09.445745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:30.758 [2024-07-12 17:37:09.445775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.445785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.445793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.445802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.445836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76310 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.445851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x920cf0 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.445895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.758 [2024-07-12 17:37:09.446189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.446442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.446458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa83d30 with addr=10.0.0.2, port=4420 00:26:30.758 [2024-07-12 17:37:09.446469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa83d30 is same with the state(5) to be set 00:26:30.758 [2024-07-12 17:37:09.446618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.446796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.446811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83f7d0 with addr=10.0.0.2, port=4420 00:26:30.758 [2024-07-12 17:37:09.446821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83f7d0 is same with the state(5) to be set 00:26:30.758 [2024-07-12 17:37:09.446970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.447189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.758 [2024-07-12 17:37:09.447206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaaca0 with addr=10.0.0.2, port=4420 00:26:30.758 [2024-07-12 17:37:09.447216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaaca0 is same with the state(5) to be set 00:26:30.758 [2024-07-12 17:37:09.447227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.447236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization 
failed 00:26:30.758 [2024-07-12 17:37:09.447246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:30.758 [2024-07-12 17:37:09.447278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.447288] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.447298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:30.758 [2024-07-12 17:37:09.447357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.447369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.447380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa83d30 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.447393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83f7d0 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.447406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaaca0 (9): Bad file descriptor 00:26:30.758 [2024-07-12 17:37:09.447443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.447455] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.447464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:26:30.758 [2024-07-12 17:37:09.447476] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.447485] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.447494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:26:30.758 [2024-07-12 17:37:09.447507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:30.758 [2024-07-12 17:37:09.447516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:30.758 [2024-07-12 17:37:09.447525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:30.758 [2024-07-12 17:37:09.447560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.447571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.758 [2024-07-12 17:37:09.447579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:31.016 17:37:09 -- target/shutdown.sh@135 -- # nvmfpid= 00:26:31.017 17:37:09 -- target/shutdown.sh@138 -- # sleep 1 00:26:31.951 17:37:10 -- target/shutdown.sh@141 -- # kill -9 36789 00:26:31.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (36789) - No such process 00:26:31.951 17:37:10 -- target/shutdown.sh@141 -- # true 00:26:31.951 17:37:10 -- target/shutdown.sh@143 -- # stoptarget 00:26:31.951 17:37:10 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:31.951 17:37:10 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:31.951 17:37:10 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:31.951 17:37:10 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:31.951 17:37:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:31.951 17:37:10 -- nvmf/common.sh@116 -- # sync 00:26:31.951 17:37:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:31.951 17:37:10 -- nvmf/common.sh@119 -- # set +e 00:26:31.951 17:37:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:31.951 17:37:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:31.951 rmmod nvme_tcp 00:26:31.951 rmmod nvme_fabrics 00:26:31.951 rmmod nvme_keyring 00:26:31.951 17:37:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:31.951 17:37:10 -- nvmf/common.sh@123 -- # set -e 00:26:31.951 17:37:10 -- nvmf/common.sh@124 -- # return 0 00:26:31.951 17:37:10 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:26:31.951 17:37:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:31.951 17:37:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:31.951 17:37:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:31.951 17:37:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.951 17:37:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:31.951 17:37:10 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.951 17:37:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.951 17:37:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.484 17:37:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:34.484 00:26:34.484 real 0m8.066s 00:26:34.484 user 0m20.888s 00:26:34.484 sys 0m1.363s 00:26:34.484 17:37:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.484 17:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:34.484 ************************************ 00:26:34.484 END TEST nvmf_shutdown_tc3 00:26:34.484 ************************************ 00:26:34.484 17:37:12 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:26:34.484 00:26:34.484 real 0m30.752s 00:26:34.484 user 1m19.293s 00:26:34.484 sys 0m7.923s 00:26:34.484 17:37:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.484 17:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:34.484 ************************************ 00:26:34.484 END TEST nvmf_shutdown 00:26:34.484 ************************************ 00:26:34.484 17:37:12 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:26:34.484 17:37:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:34.484 17:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:34.484 17:37:13 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:26:34.484 17:37:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:34.484 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:34.484 17:37:13 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:26:34.484 17:37:13 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:34.484 17:37:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:34.484 17:37:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:34.484 17:37:13 -- common/autotest_common.sh@10 -- # set 
+x 00:26:34.484 ************************************ 00:26:34.484 START TEST nvmf_multicontroller 00:26:34.484 ************************************ 00:26:34.484 17:37:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:34.484 * Looking for test storage... 00:26:34.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.484 17:37:13 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.484 17:37:13 -- nvmf/common.sh@7 -- # uname -s 00:26:34.484 17:37:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.484 17:37:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.484 17:37:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.484 17:37:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.484 17:37:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.484 17:37:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.484 17:37:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.484 17:37:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.484 17:37:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.484 17:37:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.484 17:37:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:34.484 17:37:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:34.484 17:37:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.484 17:37:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.484 17:37:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.484 17:37:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.484 17:37:13 -- scripts/common.sh@433 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:26:34.484 17:37:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.484 17:37:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.484 17:37:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.484 17:37:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.484 17:37:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.484 17:37:13 -- paths/export.sh@5 -- # export PATH 
00:26:34.484 17:37:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.484 17:37:13 -- nvmf/common.sh@46 -- # : 0 00:26:34.484 17:37:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:34.484 17:37:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:34.484 17:37:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:34.484 17:37:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.484 17:37:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.484 17:37:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:34.484 17:37:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:34.484 17:37:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:34.484 17:37:13 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:34.484 17:37:13 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:34.484 17:37:13 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:34.485 17:37:13 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:34.485 17:37:13 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:34.485 17:37:13 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:34.485 17:37:13 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:34.485 17:37:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:34.485 17:37:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.485 17:37:13 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:26:34.485 17:37:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:34.485 17:37:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:34.485 17:37:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.485 17:37:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.485 17:37:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.485 17:37:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:34.485 17:37:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:34.485 17:37:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:34.485 17:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.755 17:37:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:39.755 17:37:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:39.755 17:37:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:39.755 17:37:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:39.755 17:37:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:39.755 17:37:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:39.755 17:37:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:39.755 17:37:18 -- nvmf/common.sh@294 -- # net_devs=() 00:26:39.755 17:37:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:39.755 17:37:18 -- nvmf/common.sh@295 -- # e810=() 00:26:39.755 17:37:18 -- nvmf/common.sh@295 -- # local -ga e810 00:26:39.755 17:37:18 -- nvmf/common.sh@296 -- # x722=() 00:26:39.755 17:37:18 -- nvmf/common.sh@296 -- # local -ga x722 00:26:39.755 17:37:18 -- nvmf/common.sh@297 -- # mlx=() 00:26:39.755 17:37:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:39.755 17:37:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.755 17:37:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:39.755 17:37:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:39.755 17:37:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:39.755 17:37:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:39.755 17:37:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:39.755 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:39.755 17:37:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:39.755 17:37:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:39.755 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:39.755 17:37:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:39.755 
17:37:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:39.755 17:37:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:39.755 17:37:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:39.755 17:37:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.755 17:37:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:39.755 17:37:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.755 17:37:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:39.755 Found net devices under 0000:af:00.0: cvl_0_0 00:26:39.755 17:37:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.755 17:37:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:39.756 17:37:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.756 17:37:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:39.756 17:37:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.756 17:37:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:39.756 Found net devices under 0000:af:00.1: cvl_0_1 00:26:39.756 17:37:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.756 17:37:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:39.756 17:37:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:39.756 17:37:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:39.756 17:37:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:39.756 17:37:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:39.756 17:37:18 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:26:39.756 17:37:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.756 17:37:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.756 17:37:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:39.756 17:37:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.756 17:37:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.756 17:37:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:39.756 17:37:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.756 17:37:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.756 17:37:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:39.756 17:37:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:39.756 17:37:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.756 17:37:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.756 17:37:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.756 17:37:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.756 17:37:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:39.756 17:37:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.756 17:37:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.015 17:37:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.015 17:37:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:40.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:40.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:26:40.015 00:26:40.015 --- 10.0.0.2 ping statistics --- 00:26:40.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.015 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:26:40.015 17:37:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:26:40.015 00:26:40.015 --- 10.0.0.1 ping statistics --- 00:26:40.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.015 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:26:40.015 17:37:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.015 17:37:18 -- nvmf/common.sh@410 -- # return 0 00:26:40.015 17:37:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:40.015 17:37:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.015 17:37:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:40.015 17:37:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:40.015 17:37:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.015 17:37:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:40.015 17:37:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:40.015 17:37:18 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:40.015 17:37:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:40.016 17:37:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:40.016 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.016 17:37:18 -- nvmf/common.sh@469 -- # nvmfpid=41161 00:26:40.016 17:37:18 -- nvmf/common.sh@470 -- # waitforlisten 41161 00:26:40.016 17:37:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:40.016 17:37:18 -- 
common/autotest_common.sh@819 -- # '[' -z 41161 ']' 00:26:40.016 17:37:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.016 17:37:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:40.016 17:37:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.016 17:37:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:40.016 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.016 [2024-07-12 17:37:18.834402] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:40.016 [2024-07-12 17:37:18.834457] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.016 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.016 [2024-07-12 17:37:18.911473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:40.016 [2024-07-12 17:37:18.953538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:40.016 [2024-07-12 17:37:18.953694] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.016 [2024-07-12 17:37:18.953706] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.016 [2024-07-12 17:37:18.953716] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:40.016 [2024-07-12 17:37:18.953828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.016 [2024-07-12 17:37:18.953922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.016 [2024-07-12 17:37:18.953925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.954 17:37:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:40.955 17:37:19 -- common/autotest_common.sh@852 -- # return 0 00:26:40.955 17:37:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:40.955 17:37:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 17:37:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.955 17:37:19 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 [2024-07-12 17:37:19.811721] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.955 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.955 17:37:19 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 Malloc0 00:26:40.955 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.955 17:37:19 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.955 17:37:19 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.955 17:37:19 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 [2024-07-12 17:37:19.882057] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.955 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.955 17:37:19 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 [2024-07-12 17:37:19.890001] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:40.955 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.955 17:37:19 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 Malloc1 00:26:40.955 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.955 17:37:19 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:40.955 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.955 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:41.215 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.215 17:37:19 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:41.215 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.215 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:41.215 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.215 17:37:19 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:41.215 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.215 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:41.215 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.215 17:37:19 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:41.215 17:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.215 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:41.215 17:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.215 17:37:19 -- host/multicontroller.sh@44 -- # bdevperf_pid=41343 00:26:41.215 17:37:19 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:41.215 17:37:19 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.215 17:37:19 -- host/multicontroller.sh@47 -- # waitforlisten 41343 /var/tmp/bdevperf.sock 00:26:41.215 17:37:19 -- common/autotest_common.sh@819 -- # '[' -z 41343 ']' 00:26:41.215 17:37:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.215 17:37:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:41.215 17:37:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:41.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:41.215 17:37:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:41.215 17:37:19 -- common/autotest_common.sh@10 -- # set +x 00:26:42.152 17:37:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:42.152 17:37:20 -- common/autotest_common.sh@852 -- # return 0 00:26:42.152 17:37:20 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:42.152 17:37:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.152 17:37:20 -- common/autotest_common.sh@10 -- # set +x 00:26:42.152 NVMe0n1 00:26:42.152 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:42.152 17:37:21 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:42.152 17:37:21 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:42.152 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.152 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.152 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:42.152 1 00:26:42.152 17:37:21 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:42.152 17:37:21 -- common/autotest_common.sh@640 -- # local es=0 00:26:42.152 17:37:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:42.152 17:37:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:42.152 17:37:21 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:26:42.152 17:37:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:42.152 17:37:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:42.152 17:37:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:42.152 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.152 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.152 request: 00:26:42.152 { 00:26:42.152 "name": "NVMe0", 00:26:42.152 "trtype": "tcp", 00:26:42.152 "traddr": "10.0.0.2", 00:26:42.152 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:42.152 "hostaddr": "10.0.0.2", 00:26:42.152 "hostsvcid": "60000", 00:26:42.152 "adrfam": "ipv4", 00:26:42.152 "trsvcid": "4420", 00:26:42.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.152 "method": "bdev_nvme_attach_controller", 00:26:42.152 "req_id": 1 00:26:42.152 } 00:26:42.152 Got JSON-RPC error response 00:26:42.152 response: 00:26:42.152 { 00:26:42.152 "code": -114, 00:26:42.152 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:42.152 } 00:26:42.152 17:37:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:42.152 17:37:21 -- common/autotest_common.sh@643 -- # es=1 00:26:42.152 17:37:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:42.152 17:37:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:42.152 17:37:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:42.152 17:37:21 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:42.152 17:37:21 -- common/autotest_common.sh@640 -- # local es=0 00:26:42.152 17:37:21 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:42.152 17:37:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:42.152 17:37:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:42.152 17:37:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:42.152 17:37:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:42.152 17:37:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:42.152 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.152 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.152 request: 00:26:42.152 { 00:26:42.152 "name": "NVMe0", 00:26:42.152 "trtype": "tcp", 00:26:42.152 "traddr": "10.0.0.2", 00:26:42.152 "hostaddr": "10.0.0.2", 00:26:42.152 "hostsvcid": "60000", 00:26:42.152 "adrfam": "ipv4", 00:26:42.152 "trsvcid": "4420", 00:26:42.152 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:42.152 "method": "bdev_nvme_attach_controller", 00:26:42.152 "req_id": 1 00:26:42.152 } 00:26:42.152 Got JSON-RPC error response 00:26:42.152 response: 00:26:42.152 { 00:26:42.152 "code": -114, 00:26:42.152 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:42.152 } 00:26:42.152 17:37:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:42.152 17:37:21 -- common/autotest_common.sh@643 -- # es=1 00:26:42.152 17:37:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:42.152 17:37:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:42.152 17:37:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:42.152 17:37:21 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:42.152 17:37:21 -- common/autotest_common.sh@640 -- # local es=0 00:26:42.152 17:37:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:42.152 17:37:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:42.153 17:37:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:42.153 17:37:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:42.153 17:37:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:42.153 17:37:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:42.153 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.153 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.153 request: 00:26:42.153 { 00:26:42.153 "name": "NVMe0", 00:26:42.153 "trtype": "tcp", 00:26:42.153 "traddr": "10.0.0.2", 00:26:42.153 "hostaddr": "10.0.0.2", 00:26:42.153 "hostsvcid": "60000", 00:26:42.153 "adrfam": "ipv4", 00:26:42.153 "trsvcid": "4420", 00:26:42.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.153 "multipath": "disable", 00:26:42.153 "method": "bdev_nvme_attach_controller", 00:26:42.153 "req_id": 1 00:26:42.153 } 00:26:42.153 Got JSON-RPC error response 00:26:42.153 response: 00:26:42.153 { 00:26:42.153 "code": -114, 00:26:42.153 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:42.153 } 00:26:42.153 17:37:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:42.153 17:37:21 -- common/autotest_common.sh@643 -- # es=1 00:26:42.153 17:37:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:42.153 17:37:21 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:42.153 17:37:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:42.153 17:37:21 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:42.153 17:37:21 -- common/autotest_common.sh@640 -- # local es=0 00:26:42.153 17:37:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:42.153 17:37:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:42.153 17:37:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:42.153 17:37:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:42.153 17:37:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:42.153 17:37:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:42.153 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.153 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.153 request: 00:26:42.153 { 00:26:42.153 "name": "NVMe0", 00:26:42.153 "trtype": "tcp", 00:26:42.153 "traddr": "10.0.0.2", 00:26:42.153 "hostaddr": "10.0.0.2", 00:26:42.153 "hostsvcid": "60000", 00:26:42.153 "adrfam": "ipv4", 00:26:42.153 "trsvcid": "4420", 00:26:42.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.153 "multipath": "failover", 00:26:42.153 "method": "bdev_nvme_attach_controller", 00:26:42.153 "req_id": 1 00:26:42.153 } 00:26:42.153 Got JSON-RPC error response 00:26:42.153 response: 00:26:42.153 { 00:26:42.153 "code": -114, 00:26:42.153 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:26:42.153 } 00:26:42.153 17:37:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:42.153 17:37:21 -- common/autotest_common.sh@643 -- # es=1 00:26:42.153 17:37:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:42.153 17:37:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:42.153 17:37:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:42.153 17:37:21 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:42.153 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.153 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.412 00:26:42.412 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:42.412 17:37:21 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:42.412 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.412 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.412 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:42.412 17:37:21 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:42.412 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.412 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.671 00:26:42.671 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:42.671 17:37:21 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:42.671 17:37:21 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:42.671 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:42.671 17:37:21 -- common/autotest_common.sh@10 -- # set +x 
00:26:42.671 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:42.671 17:37:21 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:42.671 17:37:21 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:43.606 0 00:26:43.607 17:37:22 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:43.607 17:37:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:43.607 17:37:22 -- common/autotest_common.sh@10 -- # set +x 00:26:43.865 17:37:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:43.865 17:37:22 -- host/multicontroller.sh@100 -- # killprocess 41343 00:26:43.865 17:37:22 -- common/autotest_common.sh@926 -- # '[' -z 41343 ']' 00:26:43.865 17:37:22 -- common/autotest_common.sh@930 -- # kill -0 41343 00:26:43.865 17:37:22 -- common/autotest_common.sh@931 -- # uname 00:26:43.865 17:37:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:43.865 17:37:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41343 00:26:43.865 17:37:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:43.865 17:37:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:43.865 17:37:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41343' 00:26:43.865 killing process with pid 41343 00:26:43.865 17:37:22 -- common/autotest_common.sh@945 -- # kill 41343 00:26:43.865 17:37:22 -- common/autotest_common.sh@950 -- # wait 41343 00:26:43.865 17:37:22 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:43.865 17:37:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:43.865 17:37:22 -- common/autotest_common.sh@10 -- # set +x 00:26:43.865 17:37:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:43.865 17:37:22 -- host/multicontroller.sh@103 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:43.865 17:37:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:43.865 17:37:22 -- common/autotest_common.sh@10 -- # set +x 00:26:43.865 17:37:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:43.865 17:37:22 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:43.865 17:37:22 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:44.124 17:37:22 -- common/autotest_common.sh@1597 -- # read -r file 00:26:44.124 17:37:22 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:44.124 17:37:22 -- common/autotest_common.sh@1596 -- # sort -u 00:26:44.124 17:37:22 -- common/autotest_common.sh@1598 -- # cat 00:26:44.124 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:44.124 [2024-07-12 17:37:19.994278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:44.124 [2024-07-12 17:37:19.994338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41343 ] 00:26:44.124 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.124 [2024-07-12 17:37:20.079935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.124 [2024-07-12 17:37:20.122721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.124 [2024-07-12 17:37:21.398719] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 4c66e6c9-1b5b-48a8-9975-25b3d6fc6d5a already exists 00:26:44.124 [2024-07-12 17:37:21.398754] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:4c66e6c9-1b5b-48a8-9975-25b3d6fc6d5a alias for bdev NVMe1n1 00:26:44.124 [2024-07-12 17:37:21.398767] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:44.124 Running I/O for 1 seconds... 00:26:44.124 00:26:44.124 Latency(us) 00:26:44.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.124 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:44.124 NVMe0n1 : 1.01 17155.69 67.01 0.00 0.00 7448.58 4647.10 14715.81 00:26:44.125 =================================================================================================================== 00:26:44.125 Total : 17155.69 67.01 0.00 0.00 7448.58 4647.10 14715.81 00:26:44.125 Received shutdown signal, test time was about 1.000000 seconds 00:26:44.125 00:26:44.125 Latency(us) 00:26:44.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.125 =================================================================================================================== 00:26:44.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.125 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:44.125 17:37:22 -- 
common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:44.125 17:37:22 -- common/autotest_common.sh@1597 -- # read -r file 00:26:44.125 17:37:22 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:44.125 17:37:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:44.125 17:37:22 -- nvmf/common.sh@116 -- # sync 00:26:44.125 17:37:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:44.125 17:37:22 -- nvmf/common.sh@119 -- # set +e 00:26:44.125 17:37:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:44.125 17:37:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:44.125 rmmod nvme_tcp 00:26:44.125 rmmod nvme_fabrics 00:26:44.125 rmmod nvme_keyring 00:26:44.125 17:37:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:44.125 17:37:22 -- nvmf/common.sh@123 -- # set -e 00:26:44.125 17:37:22 -- nvmf/common.sh@124 -- # return 0 00:26:44.125 17:37:22 -- nvmf/common.sh@477 -- # '[' -n 41161 ']' 00:26:44.125 17:37:22 -- nvmf/common.sh@478 -- # killprocess 41161 00:26:44.125 17:37:22 -- common/autotest_common.sh@926 -- # '[' -z 41161 ']' 00:26:44.125 17:37:22 -- common/autotest_common.sh@930 -- # kill -0 41161 00:26:44.125 17:37:22 -- common/autotest_common.sh@931 -- # uname 00:26:44.125 17:37:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:44.125 17:37:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41161 00:26:44.125 17:37:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:44.125 17:37:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:44.125 17:37:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41161' 00:26:44.125 killing process with pid 41161 00:26:44.125 17:37:22 -- common/autotest_common.sh@945 -- # kill 41161 00:26:44.125 17:37:22 -- common/autotest_common.sh@950 -- # wait 41161 00:26:44.384 17:37:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:44.384 17:37:23 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:44.384 17:37:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:44.384 17:37:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.384 17:37:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:44.384 17:37:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.384 17:37:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.384 17:37:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.290 17:37:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:46.290 00:26:46.290 real 0m12.217s 00:26:46.290 user 0m17.539s 00:26:46.290 sys 0m5.031s 00:26:46.290 17:37:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.290 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:26:46.290 ************************************ 00:26:46.290 END TEST nvmf_multicontroller 00:26:46.290 ************************************ 00:26:46.549 17:37:25 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:46.549 17:37:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:46.549 17:37:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:46.549 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:26:46.549 ************************************ 00:26:46.549 START TEST nvmf_aer 00:26:46.549 ************************************ 00:26:46.549 17:37:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:46.549 * Looking for test storage... 
00:26:46.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.549 17:37:25 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.549 17:37:25 -- nvmf/common.sh@7 -- # uname -s 00:26:46.549 17:37:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.549 17:37:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.549 17:37:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.549 17:37:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.549 17:37:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.549 17:37:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.549 17:37:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.549 17:37:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.549 17:37:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.549 17:37:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.549 17:37:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:46.549 17:37:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:46.549 17:37:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.549 17:37:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.549 17:37:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.549 17:37:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.549 17:37:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.549 17:37:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.549 17:37:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.549 17:37:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.549 17:37:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.549 17:37:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.549 17:37:25 -- paths/export.sh@5 -- # export PATH 00:26:46.549 17:37:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.549 17:37:25 -- nvmf/common.sh@46 -- # : 0 00:26:46.549 17:37:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:46.549 17:37:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:46.549 17:37:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:46.549 17:37:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.549 17:37:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.549 17:37:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:46.549 17:37:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:46.549 17:37:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:46.549 17:37:25 -- host/aer.sh@11 -- # nvmftestinit 00:26:46.549 17:37:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:46.549 17:37:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.549 17:37:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:46.549 17:37:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:46.549 17:37:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:46.549 17:37:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.549 17:37:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.549 17:37:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.549 17:37:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:46.549 17:37:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:46.549 17:37:25 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:26:46.549 17:37:25 -- common/autotest_common.sh@10 -- # set +x 00:26:51.824 17:37:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:51.824 17:37:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:51.824 17:37:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:51.824 17:37:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:51.824 17:37:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:51.824 17:37:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:51.824 17:37:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:51.824 17:37:30 -- nvmf/common.sh@294 -- # net_devs=() 00:26:51.824 17:37:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:51.824 17:37:30 -- nvmf/common.sh@295 -- # e810=() 00:26:51.824 17:37:30 -- nvmf/common.sh@295 -- # local -ga e810 00:26:51.824 17:37:30 -- nvmf/common.sh@296 -- # x722=() 00:26:51.824 17:37:30 -- nvmf/common.sh@296 -- # local -ga x722 00:26:51.824 17:37:30 -- nvmf/common.sh@297 -- # mlx=() 00:26:51.824 17:37:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:51.824 17:37:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.824 17:37:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:51.824 17:37:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:51.824 17:37:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:51.824 17:37:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:51.824 17:37:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:51.824 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:51.824 17:37:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:51.824 17:37:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:51.824 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:51.824 17:37:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:51.824 17:37:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:51.824 17:37:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:26:51.824 17:37:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.824 17:37:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:51.824 17:37:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.825 17:37:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:51.825 Found net devices under 0000:af:00.0: cvl_0_0 00:26:51.825 17:37:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.825 17:37:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:51.825 17:37:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.825 17:37:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:51.825 17:37:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.825 17:37:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:51.825 Found net devices under 0000:af:00.1: cvl_0_1 00:26:51.825 17:37:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.825 17:37:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:51.825 17:37:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:51.825 17:37:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:51.825 17:37:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:51.825 17:37:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:51.825 17:37:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.825 17:37:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.825 17:37:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.825 17:37:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:51.825 17:37:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.825 17:37:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.825 17:37:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:51.825 17:37:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:26:51.825 17:37:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.825 17:37:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:51.825 17:37:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:51.825 17:37:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.825 17:37:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.825 17:37:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.825 17:37:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.825 17:37:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:51.825 17:37:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.825 17:37:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.825 17:37:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.825 17:37:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:51.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:26:51.825 00:26:51.825 --- 10.0.0.2 ping statistics --- 00:26:51.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.825 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:26:51.825 17:37:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:26:51.825 00:26:51.825 --- 10.0.0.1 ping statistics --- 00:26:51.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.825 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:26:51.825 17:37:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.825 17:37:30 -- nvmf/common.sh@410 -- # return 0 00:26:51.825 17:37:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:51.825 17:37:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.825 17:37:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:51.825 17:37:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:51.825 17:37:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.825 17:37:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:51.825 17:37:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:51.825 17:37:30 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:51.825 17:37:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:51.825 17:37:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:51.825 17:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:51.825 17:37:30 -- nvmf/common.sh@469 -- # nvmfpid=45373 00:26:51.825 17:37:30 -- nvmf/common.sh@470 -- # waitforlisten 45373 00:26:51.825 17:37:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:51.825 17:37:30 -- common/autotest_common.sh@819 -- # '[' -z 45373 ']' 00:26:51.825 17:37:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.825 17:37:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:51.825 17:37:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:51.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.825 17:37:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:51.825 17:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:51.825 [2024-07-12 17:37:30.570006] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:51.825 [2024-07-12 17:37:30.570060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.825 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.825 [2024-07-12 17:37:30.656179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.825 [2024-07-12 17:37:30.699888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:51.825 [2024-07-12 17:37:30.700034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.825 [2024-07-12 17:37:30.700046] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.825 [2024-07-12 17:37:30.700058] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:51.825 [2024-07-12 17:37:30.700098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.825 [2024-07-12 17:37:30.700200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.825 [2024-07-12 17:37:30.700279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.825 [2024-07-12 17:37:30.700289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.762 17:37:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:52.762 17:37:31 -- common/autotest_common.sh@852 -- # return 0 00:26:52.762 17:37:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:52.762 17:37:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:52.762 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.762 17:37:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.762 17:37:31 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.762 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.762 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.762 [2024-07-12 17:37:31.545189] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.762 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.762 17:37:31 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:52.762 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.762 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.762 Malloc0 00:26:52.762 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.762 17:37:31 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:52.762 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.762 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.762 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:26:52.762 17:37:31 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.762 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.762 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.762 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.762 17:37:31 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.762 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.762 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.762 [2024-07-12 17:37:31.600824] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.762 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.762 17:37:31 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:52.762 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.762 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.762 [2024-07-12 17:37:31.608591] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:52.762 [ 00:26:52.762 { 00:26:52.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:52.762 "subtype": "Discovery", 00:26:52.762 "listen_addresses": [], 00:26:52.762 "allow_any_host": true, 00:26:52.762 "hosts": [] 00:26:52.762 }, 00:26:52.762 { 00:26:52.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.762 "subtype": "NVMe", 00:26:52.762 "listen_addresses": [ 00:26:52.762 { 00:26:52.762 "transport": "TCP", 00:26:52.762 "trtype": "TCP", 00:26:52.762 "adrfam": "IPv4", 00:26:52.762 "traddr": "10.0.0.2", 00:26:52.762 "trsvcid": "4420" 00:26:52.762 } 00:26:52.762 ], 00:26:52.762 "allow_any_host": true, 00:26:52.762 "hosts": [], 00:26:52.762 "serial_number": "SPDK00000000000001", 00:26:52.762 "model_number": "SPDK bdev Controller", 
00:26:52.762 "max_namespaces": 2, 00:26:52.762 "min_cntlid": 1, 00:26:52.762 "max_cntlid": 65519, 00:26:52.762 "namespaces": [ 00:26:52.762 { 00:26:52.762 "nsid": 1, 00:26:52.762 "bdev_name": "Malloc0", 00:26:52.762 "name": "Malloc0", 00:26:52.762 "nguid": "FF12765F523442CBBCAF883D24356B2B", 00:26:52.762 "uuid": "ff12765f-5234-42cb-bcaf-883d24356b2b" 00:26:52.762 } 00:26:52.762 ] 00:26:52.762 } 00:26:52.762 ] 00:26:52.762 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.762 17:37:31 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:52.762 17:37:31 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:52.762 17:37:31 -- host/aer.sh@33 -- # aerpid=45657 00:26:52.762 17:37:31 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:52.762 17:37:31 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:52.762 17:37:31 -- common/autotest_common.sh@1244 -- # local i=0 00:26:52.762 17:37:31 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:52.762 17:37:31 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:26:52.762 17:37:31 -- common/autotest_common.sh@1247 -- # i=1 00:26:52.762 17:37:31 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:52.762 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.762 17:37:31 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:52.762 17:37:31 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:26:52.762 17:37:31 -- common/autotest_common.sh@1247 -- # i=2 00:26:52.762 17:37:31 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:53.021 17:37:31 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:53.021 17:37:31 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:53.021 17:37:31 -- common/autotest_common.sh@1255 -- # return 0 00:26:53.021 17:37:31 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:53.021 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.021 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:53.021 Malloc1 00:26:53.021 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.021 17:37:31 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:53.021 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.021 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:53.021 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.021 17:37:31 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:53.021 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.021 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:53.021 Asynchronous Event Request test 00:26:53.021 Attaching to 10.0.0.2 00:26:53.021 Attached to 10.0.0.2 00:26:53.021 Registering asynchronous event callbacks... 00:26:53.021 Starting namespace attribute notice tests for all controllers... 00:26:53.022 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:53.022 aer_cb - Changed Namespace 00:26:53.022 Cleaning up... 
00:26:53.022 [ 00:26:53.022 { 00:26:53.022 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:53.022 "subtype": "Discovery", 00:26:53.022 "listen_addresses": [], 00:26:53.022 "allow_any_host": true, 00:26:53.022 "hosts": [] 00:26:53.022 }, 00:26:53.022 { 00:26:53.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.022 "subtype": "NVMe", 00:26:53.022 "listen_addresses": [ 00:26:53.022 { 00:26:53.022 "transport": "TCP", 00:26:53.022 "trtype": "TCP", 00:26:53.022 "adrfam": "IPv4", 00:26:53.022 "traddr": "10.0.0.2", 00:26:53.022 "trsvcid": "4420" 00:26:53.022 } 00:26:53.022 ], 00:26:53.022 "allow_any_host": true, 00:26:53.022 "hosts": [], 00:26:53.022 "serial_number": "SPDK00000000000001", 00:26:53.022 "model_number": "SPDK bdev Controller", 00:26:53.022 "max_namespaces": 2, 00:26:53.022 "min_cntlid": 1, 00:26:53.022 "max_cntlid": 65519, 00:26:53.022 "namespaces": [ 00:26:53.022 { 00:26:53.022 "nsid": 1, 00:26:53.022 "bdev_name": "Malloc0", 00:26:53.022 "name": "Malloc0", 00:26:53.022 "nguid": "FF12765F523442CBBCAF883D24356B2B", 00:26:53.022 "uuid": "ff12765f-5234-42cb-bcaf-883d24356b2b" 00:26:53.022 }, 00:26:53.022 { 00:26:53.022 "nsid": 2, 00:26:53.022 "bdev_name": "Malloc1", 00:26:53.022 "name": "Malloc1", 00:26:53.022 "nguid": "FDF8D281322C440C8E5A39069D148C92", 00:26:53.022 "uuid": "fdf8d281-322c-440c-8e5a-39069d148c92" 00:26:53.022 } 00:26:53.022 ] 00:26:53.022 } 00:26:53.022 ] 00:26:53.022 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.022 17:37:31 -- host/aer.sh@43 -- # wait 45657 00:26:53.022 17:37:31 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:53.022 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.022 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:53.022 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.022 17:37:31 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:53.022 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.022 
17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:53.022 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.022 17:37:31 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.022 17:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.022 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:26:53.022 17:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.022 17:37:31 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:53.022 17:37:31 -- host/aer.sh@51 -- # nvmftestfini 00:26:53.022 17:37:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:53.022 17:37:31 -- nvmf/common.sh@116 -- # sync 00:26:53.022 17:37:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:53.022 17:37:31 -- nvmf/common.sh@119 -- # set +e 00:26:53.022 17:37:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:53.022 17:37:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:53.022 rmmod nvme_tcp 00:26:53.281 rmmod nvme_fabrics 00:26:53.281 rmmod nvme_keyring 00:26:53.281 17:37:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:53.281 17:37:32 -- nvmf/common.sh@123 -- # set -e 00:26:53.281 17:37:32 -- nvmf/common.sh@124 -- # return 0 00:26:53.281 17:37:32 -- nvmf/common.sh@477 -- # '[' -n 45373 ']' 00:26:53.281 17:37:32 -- nvmf/common.sh@478 -- # killprocess 45373 00:26:53.281 17:37:32 -- common/autotest_common.sh@926 -- # '[' -z 45373 ']' 00:26:53.281 17:37:32 -- common/autotest_common.sh@930 -- # kill -0 45373 00:26:53.281 17:37:32 -- common/autotest_common.sh@931 -- # uname 00:26:53.281 17:37:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:53.281 17:37:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 45373 00:26:53.281 17:37:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:53.281 17:37:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:53.281 17:37:32 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 45373' 00:26:53.281 killing process with pid 45373 00:26:53.281 17:37:32 -- common/autotest_common.sh@945 -- # kill 45373 00:26:53.281 [2024-07-12 17:37:32.092839] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:53.281 17:37:32 -- common/autotest_common.sh@950 -- # wait 45373 00:26:53.541 17:37:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:53.541 17:37:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:53.541 17:37:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:53.541 17:37:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.541 17:37:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:53.541 17:37:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.541 17:37:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.541 17:37:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.445 17:37:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:55.445 00:26:55.445 real 0m9.059s 00:26:55.445 user 0m7.774s 00:26:55.445 sys 0m4.258s 00:26:55.445 17:37:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.445 17:37:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.445 ************************************ 00:26:55.445 END TEST nvmf_aer 00:26:55.445 ************************************ 00:26:55.445 17:37:34 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:55.445 17:37:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:55.445 17:37:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.445 17:37:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.445 ************************************ 00:26:55.445 START TEST nvmf_async_init 00:26:55.445 
************************************ 00:26:55.445 17:37:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:55.704 * Looking for test storage... 00:26:55.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.704 17:37:34 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.704 17:37:34 -- nvmf/common.sh@7 -- # uname -s 00:26:55.704 17:37:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.704 17:37:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.704 17:37:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.704 17:37:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.704 17:37:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.704 17:37:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.704 17:37:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.704 17:37:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.704 17:37:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.704 17:37:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.704 17:37:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:55.704 17:37:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:55.704 17:37:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.704 17:37:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.704 17:37:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.704 17:37:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.704 17:37:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.704 17:37:34 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.704 17:37:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.704 17:37:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.704 17:37:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.704 17:37:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.704 17:37:34 -- paths/export.sh@5 -- # export PATH 00:26:55.705 17:37:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.705 17:37:34 -- nvmf/common.sh@46 -- # : 0 00:26:55.705 17:37:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:55.705 17:37:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:55.705 17:37:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:55.705 17:37:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.705 17:37:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.705 17:37:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:55.705 17:37:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:55.705 17:37:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:55.705 17:37:34 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:55.705 17:37:34 -- host/async_init.sh@14 -- # null_block_size=512 00:26:55.705 17:37:34 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:55.705 17:37:34 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:55.705 17:37:34 -- host/async_init.sh@20 -- # uuidgen 00:26:55.705 17:37:34 -- host/async_init.sh@20 -- # tr -d - 00:26:55.705 17:37:34 -- host/async_init.sh@20 -- # nguid=7b605976c4ed4584837b7a69b653b4db 00:26:55.705 17:37:34 -- host/async_init.sh@22 -- # nvmftestinit 00:26:55.705 17:37:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:55.705 17:37:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.705 17:37:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:55.705 17:37:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:26:55.705 17:37:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:55.705 17:37:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.705 17:37:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.705 17:37:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.705 17:37:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:55.705 17:37:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:55.705 17:37:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:55.705 17:37:34 -- common/autotest_common.sh@10 -- # set +x 00:27:00.978 17:37:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:00.978 17:37:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:00.978 17:37:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:00.978 17:37:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:00.978 17:37:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:00.978 17:37:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:00.978 17:37:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:00.978 17:37:39 -- nvmf/common.sh@294 -- # net_devs=() 00:27:00.978 17:37:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:00.978 17:37:39 -- nvmf/common.sh@295 -- # e810=() 00:27:00.978 17:37:39 -- nvmf/common.sh@295 -- # local -ga e810 00:27:00.978 17:37:39 -- nvmf/common.sh@296 -- # x722=() 00:27:00.978 17:37:39 -- nvmf/common.sh@296 -- # local -ga x722 00:27:00.978 17:37:39 -- nvmf/common.sh@297 -- # mlx=() 00:27:00.978 17:37:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:00.978 17:37:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.978 17:37:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:00.978 17:37:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:00.978 17:37:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:00.978 17:37:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:00.978 17:37:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:00.978 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:00.978 17:37:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:00.978 17:37:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:00.978 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:00.978 17:37:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:00.978 17:37:39 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:00.978 17:37:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:00.978 17:37:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.978 17:37:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:00.978 17:37:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.978 17:37:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:00.978 Found net devices under 0000:af:00.0: cvl_0_0 00:27:00.978 17:37:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.978 17:37:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:00.978 17:37:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.978 17:37:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:00.978 17:37:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.978 17:37:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:00.978 Found net devices under 0000:af:00.1: cvl_0_1 00:27:00.978 17:37:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.978 17:37:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:00.978 17:37:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:00.978 17:37:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:00.978 17:37:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:00.978 17:37:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.978 17:37:39 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.978 17:37:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.978 17:37:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:00.978 17:37:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.978 17:37:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.978 17:37:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:00.978 17:37:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.978 17:37:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.978 17:37:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:00.978 17:37:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:00.978 17:37:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.978 17:37:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.978 17:37:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.978 17:37:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.978 17:37:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:00.978 17:37:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.978 17:37:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.978 17:37:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.257 17:37:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:01.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:01.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:27:01.257 00:27:01.257 --- 10.0.0.2 ping statistics --- 00:27:01.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.257 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:27:01.257 17:37:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:01.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:01.257 00:27:01.257 --- 10.0.0.1 ping statistics --- 00:27:01.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.257 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:01.257 17:37:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.257 17:37:39 -- nvmf/common.sh@410 -- # return 0 00:27:01.257 17:37:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:01.257 17:37:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.257 17:37:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:01.257 17:37:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:01.257 17:37:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.257 17:37:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:01.257 17:37:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:01.257 17:37:40 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:01.257 17:37:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:01.257 17:37:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:01.257 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:01.257 17:37:40 -- nvmf/common.sh@469 -- # nvmfpid=49341 00:27:01.257 17:37:40 -- nvmf/common.sh@470 -- # waitforlisten 49341 00:27:01.257 17:37:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:01.257 17:37:40 -- common/autotest_common.sh@819 -- 
# '[' -z 49341 ']' 00:27:01.257 17:37:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.257 17:37:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:01.257 17:37:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.257 17:37:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:01.257 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:01.257 [2024-07-12 17:37:40.050125] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:01.257 [2024-07-12 17:37:40.050182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.257 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.257 [2024-07-12 17:37:40.137384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.257 [2024-07-12 17:37:40.180001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:01.257 [2024-07-12 17:37:40.180145] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.257 [2024-07-12 17:37:40.180157] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.257 [2024-07-12 17:37:40.180166] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:01.257 [2024-07-12 17:37:40.180188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.245 17:37:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:02.245 17:37:40 -- common/autotest_common.sh@852 -- # return 0 00:27:02.245 17:37:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:02.245 17:37:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 17:37:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.245 17:37:40 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:02.245 17:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 [2024-07-12 17:37:40.928102] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.245 17:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.245 17:37:40 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:02.245 17:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 null0 00:27:02.245 17:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.245 17:37:40 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:02.245 17:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 17:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.245 17:37:40 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:02.245 17:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 17:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.245 17:37:40 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7b605976c4ed4584837b7a69b653b4db 00:27:02.245 17:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 17:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.245 17:37:40 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:02.245 17:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 [2024-07-12 17:37:40.968335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.245 17:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.245 17:37:40 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:02.245 17:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:40 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 nvme0n1 00:27:02.245 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.245 17:37:41 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:02.245 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.245 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.245 [ 00:27:02.245 { 00:27:02.245 "name": "nvme0n1", 00:27:02.245 "aliases": [ 00:27:02.245 "7b605976-c4ed-4584-837b-7a69b653b4db" 00:27:02.245 ], 00:27:02.245 "product_name": "NVMe disk", 00:27:02.245 "block_size": 512, 00:27:02.245 "num_blocks": 2097152, 00:27:02.245 "uuid": "7b605976-c4ed-4584-837b-7a69b653b4db", 00:27:02.245 "assigned_rate_limits": { 00:27:02.245 "rw_ios_per_sec": 0, 00:27:02.245 "rw_mbytes_per_sec": 0, 00:27:02.245 "r_mbytes_per_sec": 0, 00:27:02.245 "w_mbytes_per_sec": 0 00:27:02.245 }, 00:27:02.245 
"claimed": false, 00:27:02.245 "zoned": false, 00:27:02.245 "supported_io_types": { 00:27:02.245 "read": true, 00:27:02.245 "write": true, 00:27:02.245 "unmap": false, 00:27:02.245 "write_zeroes": true, 00:27:02.245 "flush": true, 00:27:02.245 "reset": true, 00:27:02.245 "compare": true, 00:27:02.245 "compare_and_write": true, 00:27:02.245 "abort": true, 00:27:02.245 "nvme_admin": true, 00:27:02.245 "nvme_io": true 00:27:02.245 }, 00:27:02.245 "driver_specific": { 00:27:02.245 "nvme": [ 00:27:02.245 { 00:27:02.245 "trid": { 00:27:02.245 "trtype": "TCP", 00:27:02.245 "adrfam": "IPv4", 00:27:02.246 "traddr": "10.0.0.2", 00:27:02.246 "trsvcid": "4420", 00:27:02.246 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:02.246 }, 00:27:02.246 "ctrlr_data": { 00:27:02.246 "cntlid": 1, 00:27:02.246 "vendor_id": "0x8086", 00:27:02.246 "model_number": "SPDK bdev Controller", 00:27:02.246 "serial_number": "00000000000000000000", 00:27:02.246 "firmware_revision": "24.01.1", 00:27:02.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.246 "oacs": { 00:27:02.246 "security": 0, 00:27:02.246 "format": 0, 00:27:02.246 "firmware": 0, 00:27:02.246 "ns_manage": 0 00:27:02.246 }, 00:27:02.246 "multi_ctrlr": true, 00:27:02.246 "ana_reporting": false 00:27:02.246 }, 00:27:02.246 "vs": { 00:27:02.246 "nvme_version": "1.3" 00:27:02.246 }, 00:27:02.246 "ns_data": { 00:27:02.246 "id": 1, 00:27:02.246 "can_share": true 00:27:02.246 } 00:27:02.246 } 00:27:02.246 ], 00:27:02.246 "mp_policy": "active_passive" 00:27:02.246 } 00:27:02.246 } 00:27:02.246 ] 00:27:02.246 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.246 17:37:41 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:02.505 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.505 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.505 [2024-07-12 17:37:41.216865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:27:02.505 [2024-07-12 17:37:41.216938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2deb0 (9): Bad file descriptor 00:27:02.505 [2024-07-12 17:37:41.349379] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:02.505 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.505 17:37:41 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:02.505 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.505 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.505 [ 00:27:02.505 { 00:27:02.505 "name": "nvme0n1", 00:27:02.505 "aliases": [ 00:27:02.505 "7b605976-c4ed-4584-837b-7a69b653b4db" 00:27:02.505 ], 00:27:02.505 "product_name": "NVMe disk", 00:27:02.505 "block_size": 512, 00:27:02.505 "num_blocks": 2097152, 00:27:02.505 "uuid": "7b605976-c4ed-4584-837b-7a69b653b4db", 00:27:02.505 "assigned_rate_limits": { 00:27:02.505 "rw_ios_per_sec": 0, 00:27:02.505 "rw_mbytes_per_sec": 0, 00:27:02.505 "r_mbytes_per_sec": 0, 00:27:02.505 "w_mbytes_per_sec": 0 00:27:02.505 }, 00:27:02.505 "claimed": false, 00:27:02.505 "zoned": false, 00:27:02.505 "supported_io_types": { 00:27:02.505 "read": true, 00:27:02.505 "write": true, 00:27:02.505 "unmap": false, 00:27:02.505 "write_zeroes": true, 00:27:02.505 "flush": true, 00:27:02.505 "reset": true, 00:27:02.505 "compare": true, 00:27:02.505 "compare_and_write": true, 00:27:02.505 "abort": true, 00:27:02.505 "nvme_admin": true, 00:27:02.505 "nvme_io": true 00:27:02.505 }, 00:27:02.505 "driver_specific": { 00:27:02.505 "nvme": [ 00:27:02.505 { 00:27:02.505 "trid": { 00:27:02.505 "trtype": "TCP", 00:27:02.505 "adrfam": "IPv4", 00:27:02.505 "traddr": "10.0.0.2", 00:27:02.505 "trsvcid": "4420", 00:27:02.505 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:02.505 }, 00:27:02.505 "ctrlr_data": { 00:27:02.505 "cntlid": 2, 00:27:02.505 "vendor_id": "0x8086", 00:27:02.505 "model_number": "SPDK bdev 
Controller", 00:27:02.505 "serial_number": "00000000000000000000", 00:27:02.505 "firmware_revision": "24.01.1", 00:27:02.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.505 "oacs": { 00:27:02.505 "security": 0, 00:27:02.505 "format": 0, 00:27:02.505 "firmware": 0, 00:27:02.505 "ns_manage": 0 00:27:02.505 }, 00:27:02.505 "multi_ctrlr": true, 00:27:02.505 "ana_reporting": false 00:27:02.505 }, 00:27:02.505 "vs": { 00:27:02.505 "nvme_version": "1.3" 00:27:02.505 }, 00:27:02.505 "ns_data": { 00:27:02.505 "id": 1, 00:27:02.505 "can_share": true 00:27:02.505 } 00:27:02.505 } 00:27:02.505 ], 00:27:02.505 "mp_policy": "active_passive" 00:27:02.505 } 00:27:02.505 } 00:27:02.505 ] 00:27:02.505 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.505 17:37:41 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.505 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.505 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.505 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.505 17:37:41 -- host/async_init.sh@53 -- # mktemp 00:27:02.505 17:37:41 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NpK1CGmiCl 00:27:02.505 17:37:41 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:02.505 17:37:41 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NpK1CGmiCl 00:27:02.505 17:37:41 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:02.505 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.505 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.505 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.505 17:37:41 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:02.505 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:27:02.505 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.505 [2024-07-12 17:37:41.397519] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:02.505 [2024-07-12 17:37:41.397657] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:02.505 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.505 17:37:41 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NpK1CGmiCl 00:27:02.505 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.505 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.505 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.505 17:37:41 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NpK1CGmiCl 00:27:02.505 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.505 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.505 [2024-07-12 17:37:41.413563] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:02.778 nvme0n1 00:27:02.778 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.778 17:37:41 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:02.778 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.778 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.778 [ 00:27:02.778 { 00:27:02.778 "name": "nvme0n1", 00:27:02.778 "aliases": [ 00:27:02.778 "7b605976-c4ed-4584-837b-7a69b653b4db" 00:27:02.778 ], 00:27:02.778 "product_name": "NVMe disk", 00:27:02.778 "block_size": 512, 00:27:02.778 "num_blocks": 2097152, 00:27:02.778 "uuid": "7b605976-c4ed-4584-837b-7a69b653b4db", 00:27:02.778 "assigned_rate_limits": { 00:27:02.778 "rw_ios_per_sec": 0, 
00:27:02.778 "rw_mbytes_per_sec": 0, 00:27:02.778 "r_mbytes_per_sec": 0, 00:27:02.778 "w_mbytes_per_sec": 0 00:27:02.778 }, 00:27:02.778 "claimed": false, 00:27:02.778 "zoned": false, 00:27:02.778 "supported_io_types": { 00:27:02.778 "read": true, 00:27:02.778 "write": true, 00:27:02.778 "unmap": false, 00:27:02.778 "write_zeroes": true, 00:27:02.778 "flush": true, 00:27:02.779 "reset": true, 00:27:02.779 "compare": true, 00:27:02.779 "compare_and_write": true, 00:27:02.779 "abort": true, 00:27:02.779 "nvme_admin": true, 00:27:02.779 "nvme_io": true 00:27:02.779 }, 00:27:02.779 "driver_specific": { 00:27:02.779 "nvme": [ 00:27:02.779 { 00:27:02.779 "trid": { 00:27:02.779 "trtype": "TCP", 00:27:02.779 "adrfam": "IPv4", 00:27:02.779 "traddr": "10.0.0.2", 00:27:02.779 "trsvcid": "4421", 00:27:02.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:02.779 }, 00:27:02.779 "ctrlr_data": { 00:27:02.779 "cntlid": 3, 00:27:02.779 "vendor_id": "0x8086", 00:27:02.779 "model_number": "SPDK bdev Controller", 00:27:02.779 "serial_number": "00000000000000000000", 00:27:02.779 "firmware_revision": "24.01.1", 00:27:02.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.779 "oacs": { 00:27:02.779 "security": 0, 00:27:02.780 "format": 0, 00:27:02.780 "firmware": 0, 00:27:02.780 "ns_manage": 0 00:27:02.780 }, 00:27:02.780 "multi_ctrlr": true, 00:27:02.780 "ana_reporting": false 00:27:02.780 }, 00:27:02.780 "vs": { 00:27:02.780 "nvme_version": "1.3" 00:27:02.780 }, 00:27:02.780 "ns_data": { 00:27:02.780 "id": 1, 00:27:02.780 "can_share": true 00:27:02.780 } 00:27:02.780 } 00:27:02.780 ], 00:27:02.780 "mp_policy": "active_passive" 00:27:02.780 } 00:27:02.780 } 00:27:02.780 ] 00:27:02.780 17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.780 17:37:41 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.780 17:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.780 17:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:02.780 
17:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.780 17:37:41 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.NpK1CGmiCl 00:27:02.780 17:37:41 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:02.780 17:37:41 -- host/async_init.sh@78 -- # nvmftestfini 00:27:02.780 17:37:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:02.780 17:37:41 -- nvmf/common.sh@116 -- # sync 00:27:02.780 17:37:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:02.780 17:37:41 -- nvmf/common.sh@119 -- # set +e 00:27:02.780 17:37:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:02.780 17:37:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:02.780 rmmod nvme_tcp 00:27:02.780 rmmod nvme_fabrics 00:27:02.780 rmmod nvme_keyring 00:27:02.780 17:37:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:02.780 17:37:41 -- nvmf/common.sh@123 -- # set -e 00:27:02.780 17:37:41 -- nvmf/common.sh@124 -- # return 0 00:27:02.780 17:37:41 -- nvmf/common.sh@477 -- # '[' -n 49341 ']' 00:27:02.780 17:37:41 -- nvmf/common.sh@478 -- # killprocess 49341 00:27:02.780 17:37:41 -- common/autotest_common.sh@926 -- # '[' -z 49341 ']' 00:27:02.780 17:37:41 -- common/autotest_common.sh@930 -- # kill -0 49341 00:27:02.780 17:37:41 -- common/autotest_common.sh@931 -- # uname 00:27:02.780 17:37:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:02.780 17:37:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 49341 00:27:02.781 17:37:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:02.781 17:37:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:02.781 17:37:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49341' 00:27:02.781 killing process with pid 49341 00:27:02.781 17:37:41 -- common/autotest_common.sh@945 -- # kill 49341 00:27:02.781 17:37:41 -- common/autotest_common.sh@950 -- # wait 49341 00:27:03.042 17:37:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:03.042 
17:37:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:03.042 17:37:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:03.042 17:37:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.042 17:37:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:03.042 17:37:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.042 17:37:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.042 17:37:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.946 17:37:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:04.946 00:27:04.946 real 0m9.460s 00:27:04.946 user 0m3.436s 00:27:04.946 sys 0m4.528s 00:27:04.946 17:37:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.946 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.946 ************************************ 00:27:04.946 END TEST nvmf_async_init 00:27:04.946 ************************************ 00:27:04.946 17:37:43 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:04.946 17:37:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:04.946 17:37:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.946 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.946 ************************************ 00:27:04.946 START TEST dma 00:27:04.946 ************************************ 00:27:04.946 17:37:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:05.205 * Looking for test storage... 
00:27:05.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:05.205 17:37:43 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.205 17:37:43 -- nvmf/common.sh@7 -- # uname -s 00:27:05.205 17:37:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.205 17:37:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.205 17:37:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.205 17:37:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.205 17:37:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.205 17:37:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.205 17:37:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.205 17:37:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.205 17:37:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.205 17:37:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.205 17:37:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:05.205 17:37:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:05.205 17:37:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.205 17:37:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.205 17:37:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.205 17:37:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.205 17:37:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.205 17:37:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.205 17:37:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.205 17:37:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.205 17:37:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.205 17:37:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.205 17:37:43 -- paths/export.sh@5 -- # export PATH 00:27:05.205 17:37:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.205 17:37:43 -- nvmf/common.sh@46 -- # : 0 00:27:05.205 17:37:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:05.205 17:37:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:05.205 17:37:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:05.205 17:37:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.205 17:37:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.206 17:37:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:05.206 17:37:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:05.206 17:37:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:05.206 17:37:43 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:05.206 17:37:43 -- host/dma.sh@13 -- # exit 0 00:27:05.206 00:27:05.206 real 0m0.109s 00:27:05.206 user 0m0.049s 00:27:05.206 sys 0m0.068s 00:27:05.206 17:37:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.206 17:37:43 -- common/autotest_common.sh@10 -- # set +x 00:27:05.206 ************************************ 00:27:05.206 END TEST dma 00:27:05.206 ************************************ 00:27:05.206 17:37:44 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:05.206 17:37:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:05.206 17:37:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.206 17:37:44 -- common/autotest_common.sh@10 
-- # set +x 00:27:05.206 ************************************ 00:27:05.206 START TEST nvmf_identify 00:27:05.206 ************************************ 00:27:05.206 17:37:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:05.206 * Looking for test storage... 00:27:05.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:05.206 17:37:44 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.206 17:37:44 -- nvmf/common.sh@7 -- # uname -s 00:27:05.206 17:37:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.206 17:37:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.206 17:37:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.206 17:37:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.206 17:37:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.206 17:37:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.206 17:37:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.206 17:37:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.206 17:37:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.206 17:37:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.206 17:37:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:05.206 17:37:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:05.206 17:37:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.206 17:37:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.206 17:37:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.206 17:37:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.206 17:37:44 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:27:05.206 17:37:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.206 17:37:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.206 17:37:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.206 17:37:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.206 17:37:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.206 17:37:44 -- paths/export.sh@5 -- # export PATH 00:27:05.206 
17:37:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.206 17:37:44 -- nvmf/common.sh@46 -- # : 0 00:27:05.206 17:37:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:05.206 17:37:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:05.206 17:37:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:05.206 17:37:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.206 17:37:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.206 17:37:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:05.206 17:37:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:05.206 17:37:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:05.206 17:37:44 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:05.206 17:37:44 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:05.206 17:37:44 -- host/identify.sh@14 -- # nvmftestinit 00:27:05.206 17:37:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:05.206 17:37:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.206 17:37:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:05.206 17:37:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:05.206 17:37:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:05.206 17:37:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.206 17:37:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.206 17:37:44 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:05.206 17:37:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:05.206 17:37:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:05.206 17:37:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:05.206 17:37:44 -- common/autotest_common.sh@10 -- # set +x 00:27:10.474 17:37:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:10.474 17:37:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:10.474 17:37:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:10.474 17:37:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:10.474 17:37:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:10.474 17:37:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:10.474 17:37:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:10.474 17:37:49 -- nvmf/common.sh@294 -- # net_devs=() 00:27:10.474 17:37:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:10.474 17:37:49 -- nvmf/common.sh@295 -- # e810=() 00:27:10.474 17:37:49 -- nvmf/common.sh@295 -- # local -ga e810 00:27:10.474 17:37:49 -- nvmf/common.sh@296 -- # x722=() 00:27:10.474 17:37:49 -- nvmf/common.sh@296 -- # local -ga x722 00:27:10.474 17:37:49 -- nvmf/common.sh@297 -- # mlx=() 00:27:10.474 17:37:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:10.474 17:37:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.474 17:37:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:10.474 17:37:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:10.474 17:37:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:10.474 17:37:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:10.474 17:37:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:10.474 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:10.474 17:37:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:10.474 17:37:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:10.474 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:10.474 17:37:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:10.474 17:37:49 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:10.474 17:37:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:10.474 17:37:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.474 17:37:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:10.475 17:37:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.475 17:37:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:10.475 Found net devices under 0000:af:00.0: cvl_0_0 00:27:10.475 17:37:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.475 17:37:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:10.475 17:37:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.475 17:37:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:10.475 17:37:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.475 17:37:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:10.475 Found net devices under 0000:af:00.1: cvl_0_1 00:27:10.475 17:37:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.475 17:37:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:10.475 17:37:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:10.475 17:37:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:10.475 17:37:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:10.475 17:37:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:10.475 17:37:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.475 17:37:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.475 17:37:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.475 17:37:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:10.475 17:37:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.475 17:37:49 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.475 17:37:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:10.475 17:37:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.475 17:37:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.475 17:37:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:10.475 17:37:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:10.475 17:37:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.475 17:37:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.755 17:37:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.755 17:37:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.755 17:37:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:10.755 17:37:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.755 17:37:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.755 17:37:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.755 17:37:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:10.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:27:10.755 00:27:10.755 --- 10.0.0.2 ping statistics --- 00:27:10.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.755 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:10.755 17:37:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:27:10.755 00:27:10.755 --- 10.0.0.1 ping statistics --- 00:27:10.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.755 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:27:10.755 17:37:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.755 17:37:49 -- nvmf/common.sh@410 -- # return 0 00:27:10.755 17:37:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:10.755 17:37:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.755 17:37:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:10.755 17:37:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:10.755 17:37:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.755 17:37:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:10.755 17:37:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:10.755 17:37:49 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:10.755 17:37:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:10.755 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:27:10.755 17:37:49 -- host/identify.sh@19 -- # nvmfpid=53194 00:27:10.755 17:37:49 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.755 17:37:49 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.755 17:37:49 -- host/identify.sh@23 -- # waitforlisten 53194 00:27:10.755 17:37:49 -- common/autotest_common.sh@819 -- # '[' -z 53194 ']' 00:27:10.755 17:37:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.755 17:37:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:10.755 17:37:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:10.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.755 17:37:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:10.755 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:27:11.014 [2024-07-12 17:37:49.765353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:11.014 [2024-07-12 17:37:49.765407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.014 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.014 [2024-07-12 17:37:49.853780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:11.014 [2024-07-12 17:37:49.898094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:11.014 [2024-07-12 17:37:49.898241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.014 [2024-07-12 17:37:49.898252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.014 [2024-07-12 17:37:49.898268] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:11.014 [2024-07-12 17:37:49.898308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.014 [2024-07-12 17:37:49.898411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.014 [2024-07-12 17:37:49.898503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.014 [2024-07-12 17:37:49.898506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.952 17:37:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:11.952 17:37:50 -- common/autotest_common.sh@852 -- # return 0 00:27:11.952 17:37:50 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.952 17:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 [2024-07-12 17:37:50.707071] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.952 17:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.952 17:37:50 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:11.952 17:37:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 17:37:50 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:11.952 17:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 Malloc0 00:27:11.952 17:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.952 17:37:50 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.952 17:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 17:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.952 17:37:50 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:11.952 17:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 17:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.952 17:37:50 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.952 17:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 [2024-07-12 17:37:50.799147] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.952 17:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.952 17:37:50 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:11.952 17:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 17:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.952 17:37:50 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:11.952 17:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.952 17:37:50 -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 [2024-07-12 17:37:50.814946] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:11.952 [ 00:27:11.952 { 00:27:11.952 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:11.952 "subtype": "Discovery", 00:27:11.952 "listen_addresses": [ 00:27:11.952 { 00:27:11.952 "transport": "TCP", 00:27:11.952 "trtype": "TCP", 00:27:11.952 "adrfam": "IPv4", 00:27:11.952 "traddr": "10.0.0.2", 00:27:11.952 "trsvcid": "4420" 00:27:11.952 } 00:27:11.952 ], 00:27:11.952 "allow_any_host": true, 00:27:11.952 "hosts": [] 00:27:11.952 }, 00:27:11.952 
{ 00:27:11.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.952 "subtype": "NVMe", 00:27:11.952 "listen_addresses": [ 00:27:11.952 { 00:27:11.952 "transport": "TCP", 00:27:11.952 "trtype": "TCP", 00:27:11.952 "adrfam": "IPv4", 00:27:11.952 "traddr": "10.0.0.2", 00:27:11.952 "trsvcid": "4420" 00:27:11.952 } 00:27:11.952 ], 00:27:11.952 "allow_any_host": true, 00:27:11.952 "hosts": [], 00:27:11.952 "serial_number": "SPDK00000000000001", 00:27:11.952 "model_number": "SPDK bdev Controller", 00:27:11.952 "max_namespaces": 32, 00:27:11.952 "min_cntlid": 1, 00:27:11.952 "max_cntlid": 65519, 00:27:11.952 "namespaces": [ 00:27:11.952 { 00:27:11.952 "nsid": 1, 00:27:11.952 "bdev_name": "Malloc0", 00:27:11.952 "name": "Malloc0", 00:27:11.952 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:11.952 "eui64": "ABCDEF0123456789", 00:27:11.952 "uuid": "fc556e02-f994-4238-835f-c58d0fd1e46e" 00:27:11.952 } 00:27:11.952 ] 00:27:11.952 } 00:27:11.952 ] 00:27:11.952 17:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.952 17:37:50 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:11.952 [2024-07-12 17:37:50.850580] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:11.952 [2024-07-12 17:37:50.850625] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53482 ] 00:27:11.952 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.952 [2024-07-12 17:37:50.888801] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:11.952 [2024-07-12 17:37:50.888858] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:11.952 [2024-07-12 17:37:50.888865] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:11.952 [2024-07-12 17:37:50.888878] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:11.952 [2024-07-12 17:37:50.888887] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:11.952 [2024-07-12 17:37:50.889268] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:11.952 [2024-07-12 17:37:50.889308] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xedbfd0 0 00:27:11.952 [2024-07-12 17:37:50.903268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:11.952 [2024-07-12 17:37:50.903282] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:11.952 [2024-07-12 17:37:50.903288] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:11.952 [2024-07-12 17:37:50.903292] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:11.952 [2024-07-12 17:37:50.903335] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.903342] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.952 
[2024-07-12 17:37:50.903347] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.952 [2024-07-12 17:37:50.903363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:11.952 [2024-07-12 17:37:50.903383] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.952 [2024-07-12 17:37:50.911269] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.952 [2024-07-12 17:37:50.911282] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.952 [2024-07-12 17:37:50.911286] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911292] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.952 [2024-07-12 17:37:50.911309] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:11.952 [2024-07-12 17:37:50.911317] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:11.952 [2024-07-12 17:37:50.911328] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:11.952 [2024-07-12 17:37:50.911342] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.952 [2024-07-12 17:37:50.911363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.952 [2024-07-12 17:37:50.911379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, 
qid 0 00:27:11.952 [2024-07-12 17:37:50.911580] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.952 [2024-07-12 17:37:50.911588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.952 [2024-07-12 17:37:50.911593] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.952 [2024-07-12 17:37:50.911604] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:11.952 [2024-07-12 17:37:50.911614] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:11.952 [2024-07-12 17:37:50.911623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.952 [2024-07-12 17:37:50.911642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.952 [2024-07-12 17:37:50.911655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.952 [2024-07-12 17:37:50.911761] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.952 [2024-07-12 17:37:50.911769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.952 [2024-07-12 17:37:50.911774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.952 [2024-07-12 17:37:50.911785] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:11.952 [2024-07-12 17:37:50.911796] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:11.952 [2024-07-12 17:37:50.911805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.952 [2024-07-12 17:37:50.911809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.911814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.911823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.953 [2024-07-12 17:37:50.911836] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.953 [2024-07-12 17:37:50.911922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.953 [2024-07-12 17:37:50.911931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.953 [2024-07-12 17:37:50.911935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.911940] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.953 [2024-07-12 17:37:50.911947] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:11.953 [2024-07-12 17:37:50.911961] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.911967] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.911972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.953 
[2024-07-12 17:37:50.911980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.953 [2024-07-12 17:37:50.911994] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.953 [2024-07-12 17:37:50.912072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.953 [2024-07-12 17:37:50.912080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.953 [2024-07-12 17:37:50.912085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912090] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.953 [2024-07-12 17:37:50.912096] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:11.953 [2024-07-12 17:37:50.912102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:11.953 [2024-07-12 17:37:50.912111] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:11.953 [2024-07-12 17:37:50.912218] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:11.953 [2024-07-12 17:37:50.912225] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:11.953 [2024-07-12 17:37:50.912234] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.912253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.953 [2024-07-12 17:37:50.912274] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.953 [2024-07-12 17:37:50.912369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.953 [2024-07-12 17:37:50.912377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.953 [2024-07-12 17:37:50.912382] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912386] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.953 [2024-07-12 17:37:50.912392] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:11.953 [2024-07-12 17:37:50.912403] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912409] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912414] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.912422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.953 [2024-07-12 17:37:50.912436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.953 [2024-07-12 17:37:50.912537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.953 [2024-07-12 17:37:50.912545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.953 [2024-07-12 17:37:50.912549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 
17:37:50.912554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.953 [2024-07-12 17:37:50.912560] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:11.953 [2024-07-12 17:37:50.912569] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:11.953 [2024-07-12 17:37:50.912578] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:11.953 [2024-07-12 17:37:50.912596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:11.953 [2024-07-12 17:37:50.912607] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.912625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.953 [2024-07-12 17:37:50.912638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.953 [2024-07-12 17:37:50.912764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.953 [2024-07-12 17:37:50.912772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.953 [2024-07-12 17:37:50.912777] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912782] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xedbfd0): datao=0, datal=4096, cccid=0 00:27:11.953 [2024-07-12 17:37:50.912788] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf49180) on tqpair(0xedbfd0): expected_datao=0, payload_size=4096 00:27:11.953 [2024-07-12 17:37:50.912804] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912810] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.953 [2024-07-12 17:37:50.912866] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.953 [2024-07-12 17:37:50.912871] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.953 [2024-07-12 17:37:50.912884] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:11.953 [2024-07-12 17:37:50.912891] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:11.953 [2024-07-12 17:37:50.912896] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:11.953 [2024-07-12 17:37:50.912902] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:11.953 [2024-07-12 17:37:50.912908] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:11.953 [2024-07-12 17:37:50.912915] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:11.953 [2024-07-12 17:37:50.912929] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to wait for configure aer (timeout 30000 ms) 00:27:11.953 [2024-07-12 17:37:50.912939] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.912948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.912957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:11.953 [2024-07-12 17:37:50.912971] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.953 [2024-07-12 17:37:50.913076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.953 [2024-07-12 17:37:50.913085] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.953 [2024-07-12 17:37:50.913089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49180) on tqpair=0xedbfd0 00:27:11.953 [2024-07-12 17:37:50.913103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.913120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.953 [2024-07-12 17:37:50.913128] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913132] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913137] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.913144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.953 [2024-07-12 17:37:50.913151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.913168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.953 [2024-07-12 17:37:50.913175] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913180] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.913191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.953 [2024-07-12 17:37:50.913197] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:11.953 [2024-07-12 17:37:50.913211] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:11.953 [2024-07-12 17:37:50.913219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.953 [2024-07-12 17:37:50.913228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xedbfd0) 00:27:11.953 [2024-07-12 17:37:50.913237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.953 [2024-07-12 17:37:50.913252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49180, cid 0, qid 0 00:27:11.953 [2024-07-12 17:37:50.913267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf492e0, cid 1, qid 0 00:27:11.953 [2024-07-12 17:37:50.913273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49440, cid 2, qid 0 00:27:11.953 [2024-07-12 17:37:50.913279] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:11.953 [2024-07-12 17:37:50.913285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49700, cid 4, qid 0 00:27:11.953 [2024-07-12 17:37:50.913430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.954 [2024-07-12 17:37:50.913438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.954 [2024-07-12 17:37:50.913442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.954 [2024-07-12 17:37:50.913450] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49700) on tqpair=0xedbfd0 00:27:11.954 [2024-07-12 17:37:50.913457] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:11.954 [2024-07-12 17:37:50.913463] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:11.954 [2024-07-12 17:37:50.913475] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.954 [2024-07-12 17:37:50.913481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.954 [2024-07-12 17:37:50.913485] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xedbfd0) 00:27:11.954 [2024-07-12 17:37:50.913493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.954 [2024-07-12 17:37:50.913507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49700, cid 4, qid 0 00:27:11.954 [2024-07-12 17:37:50.913605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.954 [2024-07-12 17:37:50.913614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.954 [2024-07-12 17:37:50.913619] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.954 [2024-07-12 17:37:50.913624] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xedbfd0): datao=0, datal=4096, cccid=4 00:27:11.954 [2024-07-12 17:37:50.913629] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf49700) on tqpair(0xedbfd0): expected_datao=0, payload_size=4096 00:27:11.954 [2024-07-12 17:37:50.913653] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.954 [2024-07-12 17:37:50.913658] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.217 [2024-07-12 17:37:50.960280] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.217 [2024-07-12 17:37:50.960284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49700) on tqpair=0xedbfd0 00:27:12.217 [2024-07-12 17:37:50.960305] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:12.217 [2024-07-12 17:37:50.960333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960340] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960344] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xedbfd0) 00:27:12.217 [2024-07-12 17:37:50.960353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.217 [2024-07-12 17:37:50.960362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xedbfd0) 00:27:12.217 [2024-07-12 17:37:50.960379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.217 [2024-07-12 17:37:50.960400] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49700, cid 4, qid 0 00:27:12.217 [2024-07-12 17:37:50.960406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49860, cid 5, qid 0 00:27:12.217 [2024-07-12 17:37:50.960653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.217 [2024-07-12 17:37:50.960663] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.217 [2024-07-12 17:37:50.960667] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960672] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xedbfd0): datao=0, datal=1024, cccid=4 00:27:12.217 [2024-07-12 17:37:50.960678] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf49700) on tqpair(0xedbfd0): expected_datao=0, payload_size=1024 00:27:12.217 [2024-07-12 17:37:50.960690] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:27:12.217 [2024-07-12 17:37:50.960695] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960703] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.217 [2024-07-12 17:37:50.960710] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.217 [2024-07-12 17:37:50.960714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:50.960719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49860) on tqpair=0xedbfd0 00:27:12.217 [2024-07-12 17:37:51.001492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.217 [2024-07-12 17:37:51.001508] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.217 [2024-07-12 17:37:51.001512] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001518] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49700) on tqpair=0xedbfd0 00:27:12.217 [2024-07-12 17:37:51.001532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xedbfd0) 00:27:12.217 [2024-07-12 17:37:51.001552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.217 [2024-07-12 17:37:51.001574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49700, cid 4, qid 0 00:27:12.217 [2024-07-12 17:37:51.001699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.217 [2024-07-12 17:37:51.001708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.217 [2024-07-12 17:37:51.001713] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001717] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xedbfd0): datao=0, datal=3072, cccid=4 00:27:12.217 [2024-07-12 17:37:51.001723] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf49700) on tqpair(0xedbfd0): expected_datao=0, payload_size=3072 00:27:12.217 [2024-07-12 17:37:51.001739] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001745] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001783] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.217 [2024-07-12 17:37:51.001792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.217 [2024-07-12 17:37:51.001796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001801] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49700) on tqpair=0xedbfd0 00:27:12.217 [2024-07-12 17:37:51.001811] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001816] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xedbfd0) 00:27:12.217 [2024-07-12 17:37:51.001829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.217 [2024-07-12 17:37:51.001848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49700, cid 4, qid 0 00:27:12.217 [2024-07-12 17:37:51.001951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.217 [2024-07-12 17:37:51.001960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.217 [2024-07-12 
17:37:51.001964] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001969] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xedbfd0): datao=0, datal=8, cccid=4 00:27:12.217 [2024-07-12 17:37:51.001975] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf49700) on tqpair(0xedbfd0): expected_datao=0, payload_size=8 00:27:12.217 [2024-07-12 17:37:51.001989] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.001994] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.042415] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.217 [2024-07-12 17:37:51.042428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.217 [2024-07-12 17:37:51.042433] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.217 [2024-07-12 17:37:51.042438] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf49700) on tqpair=0xedbfd0 00:27:12.217 ===================================================== 00:27:12.217 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:12.217 ===================================================== 00:27:12.217 Controller Capabilities/Features 00:27:12.217 ================================ 00:27:12.217 Vendor ID: 0000 00:27:12.217 Subsystem Vendor ID: 0000 00:27:12.217 Serial Number: .................... 00:27:12.217 Model Number: ........................................ 
00:27:12.217 Firmware Version: 24.01.1 00:27:12.217 Recommended Arb Burst: 0 00:27:12.217 IEEE OUI Identifier: 00 00 00 00:27:12.217 Multi-path I/O 00:27:12.217 May have multiple subsystem ports: No 00:27:12.217 May have multiple controllers: No 00:27:12.217 Associated with SR-IOV VF: No 00:27:12.217 Max Data Transfer Size: 131072 00:27:12.217 Max Number of Namespaces: 0 00:27:12.217 Max Number of I/O Queues: 1024 00:27:12.217 NVMe Specification Version (VS): 1.3 00:27:12.217 NVMe Specification Version (Identify): 1.3 00:27:12.217 Maximum Queue Entries: 128 00:27:12.217 Contiguous Queues Required: Yes 00:27:12.217 Arbitration Mechanisms Supported 00:27:12.217 Weighted Round Robin: Not Supported 00:27:12.217 Vendor Specific: Not Supported 00:27:12.217 Reset Timeout: 15000 ms 00:27:12.217 Doorbell Stride: 4 bytes 00:27:12.217 NVM Subsystem Reset: Not Supported 00:27:12.217 Command Sets Supported 00:27:12.217 NVM Command Set: Supported 00:27:12.217 Boot Partition: Not Supported 00:27:12.217 Memory Page Size Minimum: 4096 bytes 00:27:12.218 Memory Page Size Maximum: 4096 bytes 00:27:12.218 Persistent Memory Region: Not Supported 00:27:12.218 Optional Asynchronous Events Supported 00:27:12.218 Namespace Attribute Notices: Not Supported 00:27:12.218 Firmware Activation Notices: Not Supported 00:27:12.218 ANA Change Notices: Not Supported 00:27:12.218 PLE Aggregate Log Change Notices: Not Supported 00:27:12.218 LBA Status Info Alert Notices: Not Supported 00:27:12.218 EGE Aggregate Log Change Notices: Not Supported 00:27:12.218 Normal NVM Subsystem Shutdown event: Not Supported 00:27:12.218 Zone Descriptor Change Notices: Not Supported 00:27:12.218 Discovery Log Change Notices: Supported 00:27:12.218 Controller Attributes 00:27:12.218 128-bit Host Identifier: Not Supported 00:27:12.218 Non-Operational Permissive Mode: Not Supported 00:27:12.218 NVM Sets: Not Supported 00:27:12.218 Read Recovery Levels: Not Supported 00:27:12.218 Endurance Groups: Not Supported 
00:27:12.218 Predictable Latency Mode: Not Supported 00:27:12.218 Traffic Based Keep ALive: Not Supported 00:27:12.218 Namespace Granularity: Not Supported 00:27:12.218 SQ Associations: Not Supported 00:27:12.218 UUID List: Not Supported 00:27:12.218 Multi-Domain Subsystem: Not Supported 00:27:12.218 Fixed Capacity Management: Not Supported 00:27:12.218 Variable Capacity Management: Not Supported 00:27:12.218 Delete Endurance Group: Not Supported 00:27:12.218 Delete NVM Set: Not Supported 00:27:12.218 Extended LBA Formats Supported: Not Supported 00:27:12.218 Flexible Data Placement Supported: Not Supported 00:27:12.218 00:27:12.218 Controller Memory Buffer Support 00:27:12.218 ================================ 00:27:12.218 Supported: No 00:27:12.218 00:27:12.218 Persistent Memory Region Support 00:27:12.218 ================================ 00:27:12.218 Supported: No 00:27:12.218 00:27:12.218 Admin Command Set Attributes 00:27:12.218 ============================ 00:27:12.218 Security Send/Receive: Not Supported 00:27:12.218 Format NVM: Not Supported 00:27:12.218 Firmware Activate/Download: Not Supported 00:27:12.218 Namespace Management: Not Supported 00:27:12.218 Device Self-Test: Not Supported 00:27:12.218 Directives: Not Supported 00:27:12.218 NVMe-MI: Not Supported 00:27:12.218 Virtualization Management: Not Supported 00:27:12.218 Doorbell Buffer Config: Not Supported 00:27:12.218 Get LBA Status Capability: Not Supported 00:27:12.218 Command & Feature Lockdown Capability: Not Supported 00:27:12.218 Abort Command Limit: 1 00:27:12.218 Async Event Request Limit: 4 00:27:12.218 Number of Firmware Slots: N/A 00:27:12.218 Firmware Slot 1 Read-Only: N/A 00:27:12.218 Firmware Activation Without Reset: N/A 00:27:12.218 Multiple Update Detection Support: N/A 00:27:12.218 Firmware Update Granularity: No Information Provided 00:27:12.218 Per-Namespace SMART Log: No 00:27:12.218 Asymmetric Namespace Access Log Page: Not Supported 00:27:12.218 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:27:12.218 Command Effects Log Page: Not Supported 00:27:12.218 Get Log Page Extended Data: Supported 00:27:12.218 Telemetry Log Pages: Not Supported 00:27:12.218 Persistent Event Log Pages: Not Supported 00:27:12.218 Supported Log Pages Log Page: May Support 00:27:12.218 Commands Supported & Effects Log Page: Not Supported 00:27:12.218 Feature Identifiers & Effects Log Page:May Support 00:27:12.218 NVMe-MI Commands & Effects Log Page: May Support 00:27:12.218 Data Area 4 for Telemetry Log: Not Supported 00:27:12.218 Error Log Page Entries Supported: 128 00:27:12.218 Keep Alive: Not Supported 00:27:12.218 00:27:12.218 NVM Command Set Attributes 00:27:12.218 ========================== 00:27:12.218 Submission Queue Entry Size 00:27:12.218 Max: 1 00:27:12.218 Min: 1 00:27:12.218 Completion Queue Entry Size 00:27:12.218 Max: 1 00:27:12.218 Min: 1 00:27:12.218 Number of Namespaces: 0 00:27:12.218 Compare Command: Not Supported 00:27:12.218 Write Uncorrectable Command: Not Supported 00:27:12.218 Dataset Management Command: Not Supported 00:27:12.218 Write Zeroes Command: Not Supported 00:27:12.218 Set Features Save Field: Not Supported 00:27:12.218 Reservations: Not Supported 00:27:12.218 Timestamp: Not Supported 00:27:12.218 Copy: Not Supported 00:27:12.218 Volatile Write Cache: Not Present 00:27:12.218 Atomic Write Unit (Normal): 1 00:27:12.218 Atomic Write Unit (PFail): 1 00:27:12.218 Atomic Compare & Write Unit: 1 00:27:12.218 Fused Compare & Write: Supported 00:27:12.218 Scatter-Gather List 00:27:12.218 SGL Command Set: Supported 00:27:12.218 SGL Keyed: Supported 00:27:12.218 SGL Bit Bucket Descriptor: Not Supported 00:27:12.218 SGL Metadata Pointer: Not Supported 00:27:12.218 Oversized SGL: Not Supported 00:27:12.218 SGL Metadata Address: Not Supported 00:27:12.218 SGL Offset: Supported 00:27:12.218 Transport SGL Data Block: Not Supported 00:27:12.218 Replay Protected Memory Block: Not Supported 00:27:12.218 00:27:12.218 
Firmware Slot Information 00:27:12.218 ========================= 00:27:12.218 Active slot: 0 00:27:12.218 00:27:12.218 00:27:12.218 Error Log 00:27:12.218 ========= 00:27:12.218 00:27:12.218 Active Namespaces 00:27:12.218 ================= 00:27:12.218 Discovery Log Page 00:27:12.218 ================== 00:27:12.218 Generation Counter: 2 00:27:12.218 Number of Records: 2 00:27:12.218 Record Format: 0 00:27:12.218 00:27:12.218 Discovery Log Entry 0 00:27:12.218 ---------------------- 00:27:12.218 Transport Type: 3 (TCP) 00:27:12.218 Address Family: 1 (IPv4) 00:27:12.218 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:12.218 Entry Flags: 00:27:12.218 Duplicate Returned Information: 1 00:27:12.218 Explicit Persistent Connection Support for Discovery: 1 00:27:12.218 Transport Requirements: 00:27:12.218 Secure Channel: Not Required 00:27:12.218 Port ID: 0 (0x0000) 00:27:12.218 Controller ID: 65535 (0xffff) 00:27:12.218 Admin Max SQ Size: 128 00:27:12.218 Transport Service Identifier: 4420 00:27:12.218 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:12.218 Transport Address: 10.0.0.2 00:27:12.218 Discovery Log Entry 1 00:27:12.218 ---------------------- 00:27:12.218 Transport Type: 3 (TCP) 00:27:12.218 Address Family: 1 (IPv4) 00:27:12.218 Subsystem Type: 2 (NVM Subsystem) 00:27:12.218 Entry Flags: 00:27:12.218 Duplicate Returned Information: 0 00:27:12.218 Explicit Persistent Connection Support for Discovery: 0 00:27:12.218 Transport Requirements: 00:27:12.218 Secure Channel: Not Required 00:27:12.218 Port ID: 0 (0x0000) 00:27:12.218 Controller ID: 65535 (0xffff) 00:27:12.218 Admin Max SQ Size: 128 00:27:12.218 Transport Service Identifier: 4420 00:27:12.218 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:12.218 Transport Address: 10.0.0.2 [2024-07-12 17:37:51.042543] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:12.218 [2024-07-12 17:37:51.042560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.218 [2024-07-12 17:37:51.042569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.218 [2024-07-12 17:37:51.042576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.218 [2024-07-12 17:37:51.042584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.218 [2024-07-12 17:37:51.042594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.218 [2024-07-12 17:37:51.042599] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.218 [2024-07-12 17:37:51.042604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.218 [2024-07-12 17:37:51.042613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.218 [2024-07-12 17:37:51.042630] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.218 [2024-07-12 17:37:51.042723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.218 [2024-07-12 17:37:51.042732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.218 [2024-07-12 17:37:51.042737] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.218 [2024-07-12 17:37:51.042742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.218 [2024-07-12 17:37:51.042750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.218 [2024-07-12 17:37:51.042755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.218 [2024-07-12 
17:37:51.042760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.218 [2024-07-12 17:37:51.042769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.218 [2024-07-12 17:37:51.042786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.218 [2024-07-12 17:37:51.042903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.218 [2024-07-12 17:37:51.042912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.218 [2024-07-12 17:37:51.042916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.218 [2024-07-12 17:37:51.042921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.218 [2024-07-12 17:37:51.042927] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:12.218 [2024-07-12 17:37:51.042933] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:12.218 [2024-07-12 17:37:51.042944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.218 [2024-07-12 17:37:51.042950] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.042954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.042963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.042979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.043072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 
17:37:51.043080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.043085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043090] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.043102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.043120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.043134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.043239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.043247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.043252] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.043276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043281] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043286] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.043294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 
17:37:51.043308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.043407] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.043416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.043420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.043436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043442] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043446] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.043454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.043468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.043548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.043557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.043561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.043578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.043596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.043609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.043700] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.043708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.043712] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043717] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.043729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043734] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043739] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.043747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.043760] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.043845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.043853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.043858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.043874] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043879] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.043884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.043893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.043906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.043987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.043995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.044000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.044005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.044016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.044021] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.044026] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.044034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.044048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.044132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.044141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.044145] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.044150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.044162] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.044167] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.044171] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.044180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.044193] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 17:37:51.048269] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.048281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.048285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.048290] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.048303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.048309] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.048313] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xedbfd0) 00:27:12.219 [2024-07-12 17:37:51.048322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.219 [2024-07-12 17:37:51.048338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf495a0, cid 3, qid 0 00:27:12.219 [2024-07-12 
17:37:51.048490] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.219 [2024-07-12 17:37:51.048498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.219 [2024-07-12 17:37:51.048503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.048507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf495a0) on tqpair=0xedbfd0 00:27:12.219 [2024-07-12 17:37:51.048517] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:12.219 00:27:12.219 17:37:51 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:12.219 [2024-07-12 17:37:51.088107] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:12.219 [2024-07-12 17:37:51.088142] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53491 ] 00:27:12.219 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.219 [2024-07-12 17:37:51.124450] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:12.219 [2024-07-12 17:37:51.124497] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:12.219 [2024-07-12 17:37:51.124503] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:12.219 [2024-07-12 17:37:51.124518] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:12.219 [2024-07-12 17:37:51.124526] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:12.219 [2024-07-12 
17:37:51.124834] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:12.219 [2024-07-12 17:37:51.124867] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1583fd0 0 00:27:12.219 [2024-07-12 17:37:51.139269] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:12.219 [2024-07-12 17:37:51.139282] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:12.219 [2024-07-12 17:37:51.139287] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:12.219 [2024-07-12 17:37:51.139292] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:12.219 [2024-07-12 17:37:51.139326] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.139333] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.219 [2024-07-12 17:37:51.139338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.219 [2024-07-12 17:37:51.139351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:12.219 [2024-07-12 17:37:51.139373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.219 [2024-07-12 17:37:51.147270] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.147281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.147286] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.147303] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:12.220 [2024-07-12 
17:37:51.147310] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:12.220 [2024-07-12 17:37:51.147317] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:12.220 [2024-07-12 17:37:51.147330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147340] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.147349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.220 [2024-07-12 17:37:51.147366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.147547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.147556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.147561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.147572] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:12.220 [2024-07-12 17:37:51.147582] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:12.220 [2024-07-12 17:37:51.147592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 
17:37:51.147601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.147609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.220 [2024-07-12 17:37:51.147624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.147694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.147703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.147707] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147712] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.147719] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:12.220 [2024-07-12 17:37:51.147728] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:12.220 [2024-07-12 17:37:51.147737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147742] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.147754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.220 [2024-07-12 17:37:51.147771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.147870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 
[2024-07-12 17:37:51.147878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.147882] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147887] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.147894] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:12.220 [2024-07-12 17:37:51.147907] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.147916] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.147925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.220 [2024-07-12 17:37:51.147938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.148050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.148059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.148063] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148068] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.148074] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:12.220 [2024-07-12 17:37:51.148080] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 
15000 ms) 00:27:12.220 [2024-07-12 17:37:51.148089] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:12.220 [2024-07-12 17:37:51.148196] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:12.220 [2024-07-12 17:37:51.148201] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:12.220 [2024-07-12 17:37:51.148210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.148227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.220 [2024-07-12 17:37:51.148240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.148330] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.148339] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.148344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148348] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.148355] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:12.220 [2024-07-12 17:37:51.148367] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148372] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148377] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.148385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.220 [2024-07-12 17:37:51.148402] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.148478] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.148486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.148491] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148495] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.148502] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:12.220 [2024-07-12 17:37:51.148508] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:12.220 [2024-07-12 17:37:51.148518] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:12.220 [2024-07-12 17:37:51.148528] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:12.220 [2024-07-12 17:37:51.148538] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148547] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.148555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.220 [2024-07-12 17:37:51.148569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.148680] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.220 [2024-07-12 17:37:51.148689] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.220 [2024-07-12 17:37:51.148693] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148698] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=4096, cccid=0 00:27:12.220 [2024-07-12 17:37:51.148704] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1180) on tqpair(0x1583fd0): expected_datao=0, payload_size=4096 00:27:12.220 [2024-07-12 17:37:51.148713] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148718] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.148745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.148750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 00:27:12.220 [2024-07-12 17:37:51.148764] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:12.220 [2024-07-12 17:37:51.148769] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:12.220 [2024-07-12 17:37:51.148775] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:12.220 [2024-07-12 17:37:51.148780] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:12.220 [2024-07-12 17:37:51.148786] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:12.220 [2024-07-12 17:37:51.148792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:12.220 [2024-07-12 17:37:51.148806] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:12.220 [2024-07-12 17:37:51.148817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148822] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.220 [2024-07-12 17:37:51.148826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.220 [2024-07-12 17:37:51.148835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:12.220 [2024-07-12 17:37:51.148850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.220 [2024-07-12 17:37:51.148936] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.220 [2024-07-12 17:37:51.148944] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.220 [2024-07-12 17:37:51.148949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.148954] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1180) on tqpair=0x1583fd0 
00:27:12.221 [2024-07-12 17:37:51.148962] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.148967] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.148972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.148979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.221 [2024-07-12 17:37:51.148987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.148991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.148996] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.149003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.221 [2024-07-12 17:37:51.149010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149020] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.149027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.221 [2024-07-12 17:37:51.149034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.149050] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.221 [2024-07-12 17:37:51.149056] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149087] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.149095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.221 [2024-07-12 17:37:51.149110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1180, cid 0, qid 0 00:27:12.221 [2024-07-12 17:37:51.149117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f12e0, cid 1, qid 0 00:27:12.221 [2024-07-12 17:37:51.149123] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1440, cid 2, qid 0 00:27:12.221 [2024-07-12 17:37:51.149131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.221 [2024-07-12 17:37:51.149137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1700, cid 4, qid 0 00:27:12.221 [2024-07-12 17:37:51.149276] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.221 [2024-07-12 17:37:51.149285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.221 [2024-07-12 17:37:51.149289] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1700) on tqpair=0x1583fd0 00:27:12.221 [2024-07-12 17:37:51.149300] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:12.221 [2024-07-12 17:37:51.149307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149318] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149329] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149337] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149346] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.149354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:12.221 [2024-07-12 17:37:51.149368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1700, cid 4, qid 0 00:27:12.221 [2024-07-12 17:37:51.149444] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.221 [2024-07-12 17:37:51.149452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.221 [2024-07-12 17:37:51.149456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149461] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1700) on tqpair=0x1583fd0 00:27:12.221 [2024-07-12 17:37:51.149536] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149561] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.149574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.221 [2024-07-12 17:37:51.149588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1700, cid 4, qid 0 00:27:12.221 [2024-07-12 17:37:51.149675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.221 [2024-07-12 17:37:51.149683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.221 [2024-07-12 17:37:51.149688] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149692] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=4096, cccid=4 00:27:12.221 [2024-07-12 17:37:51.149698] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1700) on tqpair(0x1583fd0): expected_datao=0, payload_size=4096 00:27:12.221 [2024-07-12 17:37:51.149735] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149740] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:27:12.221 [2024-07-12 17:37:51.149817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.221 [2024-07-12 17:37:51.149826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.221 [2024-07-12 17:37:51.149830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1700) on tqpair=0x1583fd0 00:27:12.221 [2024-07-12 17:37:51.149849] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:12.221 [2024-07-12 17:37:51.149865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149877] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:12.221 [2024-07-12 17:37:51.149886] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.149895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1583fd0) 00:27:12.221 [2024-07-12 17:37:51.149903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.221 [2024-07-12 17:37:51.149918] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1700, cid 4, qid 0 00:27:12.221 [2024-07-12 17:37:51.149996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.221 [2024-07-12 17:37:51.150004] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.221 [2024-07-12 17:37:51.150008] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 
17:37:51.150013] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=4096, cccid=4 00:27:12.221 [2024-07-12 17:37:51.150018] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1700) on tqpair(0x1583fd0): expected_datao=0, payload_size=4096 00:27:12.221 [2024-07-12 17:37:51.150033] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.150038] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.150117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.221 [2024-07-12 17:37:51.150126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.221 [2024-07-12 17:37:51.150130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.221 [2024-07-12 17:37:51.150135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1700) on tqpair=0x1583fd0 00:27:12.222 [2024-07-12 17:37:51.150150] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150161] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.150187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.150201] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1700, cid 4, qid 0 00:27:12.222 [2024-07-12 17:37:51.150279] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.222 [2024-07-12 17:37:51.150288] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.222 [2024-07-12 17:37:51.150292] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150297] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=4096, cccid=4 00:27:12.222 [2024-07-12 17:37:51.150305] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1700) on tqpair(0x1583fd0): expected_datao=0, payload_size=4096 00:27:12.222 [2024-07-12 17:37:51.150340] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150345] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.222 [2024-07-12 17:37:51.150434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.222 [2024-07-12 17:37:51.150438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150443] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1700) on tqpair=0x1583fd0 00:27:12.222 [2024-07-12 17:37:51.150453] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150462] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150472] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150480] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150486] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150492] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:12.222 [2024-07-12 17:37:51.150498] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:12.222 [2024-07-12 17:37:51.150504] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:12.222 [2024-07-12 17:37:51.150520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.150538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.150546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.150563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.222 [2024-07-12 17:37:51.150581] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1700, cid 4, 
qid 0 00:27:12.222 [2024-07-12 17:37:51.150587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1860, cid 5, qid 0 00:27:12.222 [2024-07-12 17:37:51.150673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.222 [2024-07-12 17:37:51.150681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.222 [2024-07-12 17:37:51.150685] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1700) on tqpair=0x1583fd0 00:27:12.222 [2024-07-12 17:37:51.150699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.222 [2024-07-12 17:37:51.150706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.222 [2024-07-12 17:37:51.150711] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150715] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1860) on tqpair=0x1583fd0 00:27:12.222 [2024-07-12 17:37:51.150730] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.150748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.150762] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1860, cid 5, qid 0 00:27:12.222 [2024-07-12 17:37:51.150873] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.222 [2024-07-12 17:37:51.150881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.222 [2024-07-12 17:37:51.150885] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1860) on tqpair=0x1583fd0 00:27:12.222 [2024-07-12 17:37:51.150902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.150912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.150920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.150934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1860, cid 5, qid 0 00:27:12.222 [2024-07-12 17:37:51.151027] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.222 [2024-07-12 17:37:51.151035] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.222 [2024-07-12 17:37:51.151039] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1860) on tqpair=0x1583fd0 00:27:12.222 [2024-07-12 17:37:51.151056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151066] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.151074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.151087] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x15f1860, cid 5, qid 0 00:27:12.222 [2024-07-12 17:37:51.151170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.222 [2024-07-12 17:37:51.151178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.222 [2024-07-12 17:37:51.151182] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151187] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1860) on tqpair=0x1583fd0 00:27:12.222 [2024-07-12 17:37:51.151201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.151219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.151228] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.151237] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.151244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.155262] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 
17:37:51.155282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.155291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155300] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1583fd0) 00:27:12.222 [2024-07-12 17:37:51.155308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.222 [2024-07-12 17:37:51.155326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1860, cid 5, qid 0 00:27:12.222 [2024-07-12 17:37:51.155332] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1700, cid 4, qid 0 00:27:12.222 [2024-07-12 17:37:51.155338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f19c0, cid 6, qid 0 00:27:12.222 [2024-07-12 17:37:51.155344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1b20, cid 7, qid 0 00:27:12.222 [2024-07-12 17:37:51.155561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.222 [2024-07-12 17:37:51.155570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.222 [2024-07-12 17:37:51.155574] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155579] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=8192, cccid=5 00:27:12.222 [2024-07-12 17:37:51.155585] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1860) on tqpair(0x1583fd0): expected_datao=0, payload_size=8192 00:27:12.222 [2024-07-12 17:37:51.155594] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155599] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.222 [2024-07-12 17:37:51.155614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.222 [2024-07-12 17:37:51.155618] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155623] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=512, cccid=4 00:27:12.222 [2024-07-12 17:37:51.155629] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1700) on tqpair(0x1583fd0): expected_datao=0, payload_size=512 00:27:12.222 [2024-07-12 17:37:51.155637] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155642] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.222 [2024-07-12 17:37:51.155656] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.222 [2024-07-12 17:37:51.155661] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.222 [2024-07-12 17:37:51.155665] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=512, cccid=6 00:27:12.223 [2024-07-12 17:37:51.155671] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f19c0) on tqpair(0x1583fd0): expected_datao=0, payload_size=512 00:27:12.223 [2024-07-12 17:37:51.155679] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.223 [2024-07-12 17:37:51.155684] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.223 [2024-07-12 17:37:51.155691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:27:12.223 [2024-07-12 17:37:51.155698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.223 [2024-07-12 17:37:51.155702] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.223 [2024-07-12 17:37:51.155709] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1583fd0): datao=0, datal=4096, cccid=7 00:27:12.223 [2024-07-12 17:37:51.155715] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1b20) on tqpair(0x1583fd0): expected_datao=0, payload_size=4096 00:27:12.223 [2024-07-12 17:37:51.155732] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.223 [2024-07-12 17:37:51.155737] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.483 [2024-07-12 17:37:51.196418] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.483 [2024-07-12 17:37:51.196432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.483 [2024-07-12 17:37:51.196437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.483 [2024-07-12 17:37:51.196442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1860) on tqpair=0x1583fd0 00:27:12.483 [2024-07-12 17:37:51.196459] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.483 [2024-07-12 17:37:51.196467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.483 [2024-07-12 17:37:51.196471] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.483 [2024-07-12 17:37:51.196476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1700) on tqpair=0x1583fd0 00:27:12.483 [2024-07-12 17:37:51.196487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.483 [2024-07-12 17:37:51.196495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.483 [2024-07-12 17:37:51.196499] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter
00:27:12.483 [2024-07-12 17:37:51.196504] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f19c0) on tqpair=0x1583fd0
00:27:12.483 [2024-07-12 17:37:51.196513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.483 [2024-07-12 17:37:51.196521] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.483 [2024-07-12 17:37:51.196525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.483 [2024-07-12 17:37:51.196530] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1b20) on tqpair=0x1583fd0
00:27:12.483 =====================================================
00:27:12.483 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:12.483 =====================================================
00:27:12.483 Controller Capabilities/Features
00:27:12.483 ================================
00:27:12.483 Vendor ID: 8086
00:27:12.483 Subsystem Vendor ID: 8086
00:27:12.483 Serial Number: SPDK00000000000001
00:27:12.483 Model Number: SPDK bdev Controller
00:27:12.483 Firmware Version: 24.01.1
00:27:12.483 Recommended Arb Burst: 6
00:27:12.483 IEEE OUI Identifier: e4 d2 5c
00:27:12.483 Multi-path I/O
00:27:12.483 May have multiple subsystem ports: Yes
00:27:12.483 May have multiple controllers: Yes
00:27:12.483 Associated with SR-IOV VF: No
00:27:12.483 Max Data Transfer Size: 131072
00:27:12.483 Max Number of Namespaces: 32
00:27:12.483 Max Number of I/O Queues: 127
00:27:12.483 NVMe Specification Version (VS): 1.3
00:27:12.483 NVMe Specification Version (Identify): 1.3
00:27:12.483 Maximum Queue Entries: 128
00:27:12.483 Contiguous Queues Required: Yes
00:27:12.483 Arbitration Mechanisms Supported
00:27:12.483 Weighted Round Robin: Not Supported
00:27:12.483 Vendor Specific: Not Supported
00:27:12.483 Reset Timeout: 15000 ms
00:27:12.483 Doorbell Stride: 4 bytes
00:27:12.483 NVM Subsystem Reset: Not Supported
00:27:12.483 Command Sets Supported
00:27:12.483 NVM Command Set: Supported
00:27:12.483 Boot Partition: Not Supported
00:27:12.483 Memory Page Size Minimum: 4096 bytes
00:27:12.483 Memory Page Size Maximum: 4096 bytes
00:27:12.483 Persistent Memory Region: Not Supported
00:27:12.483 Optional Asynchronous Events Supported
00:27:12.483 Namespace Attribute Notices: Supported
00:27:12.483 Firmware Activation Notices: Not Supported
00:27:12.483 ANA Change Notices: Not Supported
00:27:12.483 PLE Aggregate Log Change Notices: Not Supported
00:27:12.483 LBA Status Info Alert Notices: Not Supported
00:27:12.483 EGE Aggregate Log Change Notices: Not Supported
00:27:12.483 Normal NVM Subsystem Shutdown event: Not Supported
00:27:12.483 Zone Descriptor Change Notices: Not Supported
00:27:12.483 Discovery Log Change Notices: Not Supported
00:27:12.483 Controller Attributes
00:27:12.483 128-bit Host Identifier: Supported
00:27:12.483 Non-Operational Permissive Mode: Not Supported
00:27:12.483 NVM Sets: Not Supported
00:27:12.483 Read Recovery Levels: Not Supported
00:27:12.483 Endurance Groups: Not Supported
00:27:12.483 Predictable Latency Mode: Not Supported
00:27:12.483 Traffic Based Keep ALive: Not Supported
00:27:12.483 Namespace Granularity: Not Supported
00:27:12.483 SQ Associations: Not Supported
00:27:12.483 UUID List: Not Supported
00:27:12.483 Multi-Domain Subsystem: Not Supported
00:27:12.483 Fixed Capacity Management: Not Supported
00:27:12.483 Variable Capacity Management: Not Supported
00:27:12.483 Delete Endurance Group: Not Supported
00:27:12.483 Delete NVM Set: Not Supported
00:27:12.483 Extended LBA Formats Supported: Not Supported
00:27:12.483 Flexible Data Placement Supported: Not Supported
00:27:12.483
00:27:12.483 Controller Memory Buffer Support
00:27:12.483 ================================
00:27:12.483 Supported: No
00:27:12.483
00:27:12.483 Persistent Memory Region Support
00:27:12.483 ================================
00:27:12.483 Supported: No
00:27:12.483
00:27:12.483 Admin Command Set Attributes
00:27:12.483 ============================
00:27:12.483 Security Send/Receive: Not Supported
00:27:12.483 Format NVM: Not Supported
00:27:12.483 Firmware Activate/Download: Not Supported
00:27:12.483 Namespace Management: Not Supported
00:27:12.483 Device Self-Test: Not Supported
00:27:12.483 Directives: Not Supported
00:27:12.483 NVMe-MI: Not Supported
00:27:12.483 Virtualization Management: Not Supported
00:27:12.483 Doorbell Buffer Config: Not Supported
00:27:12.483 Get LBA Status Capability: Not Supported
00:27:12.483 Command & Feature Lockdown Capability: Not Supported
00:27:12.483 Abort Command Limit: 4
00:27:12.483 Async Event Request Limit: 4
00:27:12.483 Number of Firmware Slots: N/A
00:27:12.483 Firmware Slot 1 Read-Only: N/A
00:27:12.483 Firmware Activation Without Reset: N/A
00:27:12.483 Multiple Update Detection Support: N/A
00:27:12.483 Firmware Update Granularity: No Information Provided
00:27:12.483 Per-Namespace SMART Log: No
00:27:12.483 Asymmetric Namespace Access Log Page: Not Supported
00:27:12.483 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:27:12.483 Command Effects Log Page: Supported
00:27:12.483 Get Log Page Extended Data: Supported
00:27:12.483 Telemetry Log Pages: Not Supported
00:27:12.483 Persistent Event Log Pages: Not Supported
00:27:12.483 Supported Log Pages Log Page: May Support
00:27:12.483 Commands Supported & Effects Log Page: Not Supported
00:27:12.483 Feature Identifiers & Effects Log Page:May Support
00:27:12.483 NVMe-MI Commands & Effects Log Page: May Support
00:27:12.483 Data Area 4 for Telemetry Log: Not Supported
00:27:12.483 Error Log Page Entries Supported: 128
00:27:12.483 Keep Alive: Supported
00:27:12.484 Keep Alive Granularity: 10000 ms
00:27:12.484
00:27:12.484 NVM Command Set Attributes
00:27:12.484 ==========================
00:27:12.484 Submission Queue Entry Size
00:27:12.484 Max: 64
00:27:12.484 Min: 64
00:27:12.484 Completion Queue Entry Size
00:27:12.484 Max: 16
00:27:12.484 Min: 16
00:27:12.484 Number of Namespaces: 32
00:27:12.484 Compare Command: Supported
00:27:12.484 Write Uncorrectable Command: Not Supported
00:27:12.484 Dataset Management Command: Supported
00:27:12.484 Write Zeroes Command: Supported
00:27:12.484 Set Features Save Field: Not Supported
00:27:12.484 Reservations: Supported
00:27:12.484 Timestamp: Not Supported
00:27:12.484 Copy: Supported
00:27:12.484 Volatile Write Cache: Present
00:27:12.484 Atomic Write Unit (Normal): 1
00:27:12.484 Atomic Write Unit (PFail): 1
00:27:12.484 Atomic Compare & Write Unit: 1
00:27:12.484 Fused Compare & Write: Supported
00:27:12.484 Scatter-Gather List
00:27:12.484 SGL Command Set: Supported
00:27:12.484 SGL Keyed: Supported
00:27:12.484 SGL Bit Bucket Descriptor: Not Supported
00:27:12.484 SGL Metadata Pointer: Not Supported
00:27:12.484 Oversized SGL: Not Supported
00:27:12.484 SGL Metadata Address: Not Supported
00:27:12.484 SGL Offset: Supported
00:27:12.484 Transport SGL Data Block: Not Supported
00:27:12.484 Replay Protected Memory Block: Not Supported
00:27:12.484
00:27:12.484 Firmware Slot Information
00:27:12.484 =========================
00:27:12.484 Active slot: 1
00:27:12.484 Slot 1 Firmware Revision: 24.01.1
00:27:12.484
00:27:12.484
00:27:12.484 Commands Supported and Effects
00:27:12.484 ==============================
00:27:12.484 Admin Commands
00:27:12.484 --------------
00:27:12.484 Get Log Page (02h): Supported
00:27:12.484 Identify (06h): Supported
00:27:12.484 Abort (08h): Supported
00:27:12.484 Set Features (09h): Supported
00:27:12.484 Get Features (0Ah): Supported
00:27:12.484 Asynchronous Event Request (0Ch): Supported
00:27:12.484 Keep Alive (18h): Supported
00:27:12.484 I/O Commands
00:27:12.484 ------------
00:27:12.484 Flush (00h): Supported LBA-Change
00:27:12.484 Write (01h): Supported LBA-Change
00:27:12.484 Read (02h): Supported
00:27:12.484 Compare (05h): Supported
00:27:12.484 Write Zeroes (08h): Supported LBA-Change
00:27:12.484 Dataset Management (09h): Supported LBA-Change
00:27:12.484 Copy (19h): Supported LBA-Change
00:27:12.484 Unknown (79h): Supported LBA-Change
00:27:12.484 Unknown (7Ah): Supported
00:27:12.484
00:27:12.484 Error Log
00:27:12.484 =========
00:27:12.484
00:27:12.484 Arbitration
00:27:12.484 ===========
00:27:12.484 Arbitration Burst: 1
00:27:12.484
00:27:12.484 Power Management
00:27:12.484 ================
00:27:12.484 Number of Power States: 1
00:27:12.484 Current Power State: Power State #0
00:27:12.484 Power State #0:
00:27:12.484 Max Power: 0.00 W
00:27:12.484 Non-Operational State: Operational
00:27:12.484 Entry Latency: Not Reported
00:27:12.484 Exit Latency: Not Reported
00:27:12.484 Relative Read Throughput: 0
00:27:12.484 Relative Read Latency: 0
00:27:12.484 Relative Write Throughput: 0
00:27:12.484 Relative Write Latency: 0
00:27:12.484 Idle Power: Not Reported
00:27:12.484 Active Power: Not Reported
00:27:12.484 Non-Operational Permissive Mode: Not Supported
00:27:12.484
00:27:12.484 Health Information
00:27:12.484 ==================
00:27:12.484 Critical Warnings:
00:27:12.484 Available Spare Space: OK
00:27:12.484 Temperature: OK
00:27:12.484 Device Reliability: OK
00:27:12.484 Read Only: No
00:27:12.484 Volatile Memory Backup: OK
00:27:12.484 Current Temperature: 0 Kelvin (-273 Celsius)
00:27:12.484 Temperature Threshold: [2024-07-12 17:37:51.196654] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.484 [2024-07-12 17:37:51.196661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.484 [2024-07-12 17:37:51.196665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1583fd0)
00:27:12.484 [2024-07-12 17:37:51.196675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.484 [2024-07-12 17:37:51.196693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1b20, cid 7, qid 0
00:27:12.484 [2024-07-12 17:37:51.196765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.196774] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.196778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.196782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1b20) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.196817] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:12.484 [2024-07-12 17:37:51.196832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.484 [2024-07-12 17:37:51.196841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.484 [2024-07-12 17:37:51.196848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.484 [2024-07-12 17:37:51.196856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.484 [2024-07-12 17:37:51.196865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.196871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.196877] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.196886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.196902] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.196970] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.196979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.196983] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.196988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.196997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197002] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197006] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.197032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.197134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.197141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.197146] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.197157] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:12.484 [2024-07-12 17:37:51.197163] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:12.484 [2024-07-12 17:37:51.197175] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197180] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.197206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.197280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.197289] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.197293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197298] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.197311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.197342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.197408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.197416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.197420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197425] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.197440] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.197471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.197543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.197551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.197555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.197572] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.197603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.197673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 
[2024-07-12 17:37:51.197682] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.197686] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.197703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197713] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.197734] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.197804] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.197812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.197816] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197821] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.197833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.484 [2024-07-12 17:37:51.197864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.197930] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.197938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.197943] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197948] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.197960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197967] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.197972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.197980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.197993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.198060] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.198068] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.198072] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.198077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.198089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.198094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.198099] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.198107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.198121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.198185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.198194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.198198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.198203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 [2024-07-12 17:37:51.198216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.198221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.198225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.198234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.198246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.202264] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.484 [2024-07-12 17:37:51.202275] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.484 [2024-07-12 17:37:51.202280] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.202284] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.484 
[2024-07-12 17:37:51.202299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.202304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.484 [2024-07-12 17:37:51.202308] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1583fd0) 00:27:12.484 [2024-07-12 17:37:51.202318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-07-12 17:37:51.202333] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f15a0, cid 3, qid 0 00:27:12.484 [2024-07-12 17:37:51.202504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.485 [2024-07-12 17:37:51.202512] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.485 [2024-07-12 17:37:51.202516] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.485 [2024-07-12 17:37:51.202521] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f15a0) on tqpair=0x1583fd0 00:27:12.485 [2024-07-12 17:37:51.202532] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:12.485 0 Kelvin (-273 Celsius) 00:27:12.485 Available Spare: 0% 00:27:12.485 Available Spare Threshold: 0% 00:27:12.485 Life Percentage Used: 0% 00:27:12.485 Data Units Read: 0 00:27:12.485 Data Units Written: 0 00:27:12.485 Host Read Commands: 0 00:27:12.485 Host Write Commands: 0 00:27:12.485 Controller Busy Time: 0 minutes 00:27:12.485 Power Cycles: 0 00:27:12.485 Power On Hours: 0 hours 00:27:12.485 Unsafe Shutdowns: 0 00:27:12.485 Unrecoverable Media Errors: 0 00:27:12.485 Lifetime Error Log Entries: 0 00:27:12.485 Warning Temperature Time: 0 minutes 00:27:12.485 Critical Temperature Time: 0 minutes 00:27:12.485 00:27:12.485 Number of Queues 00:27:12.485 ================ 00:27:12.485 Number of I/O 
Submission Queues: 127 00:27:12.485 Number of I/O Completion Queues: 127 00:27:12.485 00:27:12.485 Active Namespaces 00:27:12.485 ================= 00:27:12.485 Namespace ID:1 00:27:12.485 Error Recovery Timeout: Unlimited 00:27:12.485 Command Set Identifier: NVM (00h) 00:27:12.485 Deallocate: Supported 00:27:12.485 Deallocated/Unwritten Error: Not Supported 00:27:12.485 Deallocated Read Value: Unknown 00:27:12.485 Deallocate in Write Zeroes: Not Supported 00:27:12.485 Deallocated Guard Field: 0xFFFF 00:27:12.485 Flush: Supported 00:27:12.485 Reservation: Supported 00:27:12.485 Namespace Sharing Capabilities: Multiple Controllers 00:27:12.485 Size (in LBAs): 131072 (0GiB) 00:27:12.485 Capacity (in LBAs): 131072 (0GiB) 00:27:12.485 Utilization (in LBAs): 131072 (0GiB) 00:27:12.485 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:12.485 EUI64: ABCDEF0123456789 00:27:12.485 UUID: fc556e02-f994-4238-835f-c58d0fd1e46e 00:27:12.485 Thin Provisioning: Not Supported 00:27:12.485 Per-NS Atomic Units: Yes 00:27:12.485 Atomic Boundary Size (Normal): 0 00:27:12.485 Atomic Boundary Size (PFail): 0 00:27:12.485 Atomic Boundary Offset: 0 00:27:12.485 Maximum Single Source Range Length: 65535 00:27:12.485 Maximum Copy Length: 65535 00:27:12.485 Maximum Source Range Count: 1 00:27:12.485 NGUID/EUI64 Never Reused: No 00:27:12.485 Namespace Write Protected: No 00:27:12.485 Number of LBA Formats: 1 00:27:12.485 Current LBA Format: LBA Format #00 00:27:12.485 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:12.485 00:27:12.485 17:37:51 -- host/identify.sh@51 -- # sync 00:27:12.485 17:37:51 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:12.485 17:37:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:12.485 17:37:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.485 17:37:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:12.485 17:37:51 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:12.485 17:37:51 -- 
host/identify.sh@56 -- # nvmftestfini 00:27:12.485 17:37:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:12.485 17:37:51 -- nvmf/common.sh@116 -- # sync 00:27:12.485 17:37:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:12.485 17:37:51 -- nvmf/common.sh@119 -- # set +e 00:27:12.485 17:37:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:12.485 17:37:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:12.485 rmmod nvme_tcp 00:27:12.485 rmmod nvme_fabrics 00:27:12.485 rmmod nvme_keyring 00:27:12.485 17:37:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:12.485 17:37:51 -- nvmf/common.sh@123 -- # set -e 00:27:12.485 17:37:51 -- nvmf/common.sh@124 -- # return 0 00:27:12.485 17:37:51 -- nvmf/common.sh@477 -- # '[' -n 53194 ']' 00:27:12.485 17:37:51 -- nvmf/common.sh@478 -- # killprocess 53194 00:27:12.485 17:37:51 -- common/autotest_common.sh@926 -- # '[' -z 53194 ']' 00:27:12.485 17:37:51 -- common/autotest_common.sh@930 -- # kill -0 53194 00:27:12.485 17:37:51 -- common/autotest_common.sh@931 -- # uname 00:27:12.485 17:37:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:12.485 17:37:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53194 00:27:12.485 17:37:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:12.485 17:37:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:12.485 17:37:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53194' 00:27:12.485 killing process with pid 53194 00:27:12.485 17:37:51 -- common/autotest_common.sh@945 -- # kill 53194 00:27:12.485 [2024-07-12 17:37:51.330523] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:12.485 17:37:51 -- common/autotest_common.sh@950 -- # wait 53194 00:27:12.744 17:37:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:12.744 17:37:51 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:12.744 17:37:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:12.744 17:37:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.744 17:37:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:12.744 17:37:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.744 17:37:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.744 17:37:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.644 17:37:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:14.644 00:27:14.644 real 0m9.581s 00:27:14.644 user 0m8.147s 00:27:14.644 sys 0m4.697s 00:27:14.644 17:37:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.644 17:37:53 -- common/autotest_common.sh@10 -- # set +x 00:27:14.644 ************************************ 00:27:14.644 END TEST nvmf_identify 00:27:14.644 ************************************ 00:27:14.903 17:37:53 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:14.903 17:37:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:14.903 17:37:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.903 17:37:53 -- common/autotest_common.sh@10 -- # set +x 00:27:14.903 ************************************ 00:27:14.903 START TEST nvmf_perf 00:27:14.903 ************************************ 00:27:14.903 17:37:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:14.903 * Looking for test storage... 
00:27:14.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.903 17:37:53 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.903 17:37:53 -- nvmf/common.sh@7 -- # uname -s 00:27:14.903 17:37:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.903 17:37:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.903 17:37:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.903 17:37:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.903 17:37:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.903 17:37:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.903 17:37:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.903 17:37:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.903 17:37:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.903 17:37:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.903 17:37:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:14.903 17:37:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:14.903 17:37:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.903 17:37:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.903 17:37:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.903 17:37:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.903 17:37:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.903 17:37:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.903 17:37:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.903 17:37:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.903 17:37:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.903 17:37:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.903 17:37:53 -- paths/export.sh@5 -- # export PATH 00:27:14.903 17:37:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.903 17:37:53 -- nvmf/common.sh@46 -- # : 0 00:27:14.903 17:37:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:14.903 17:37:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:14.903 17:37:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:14.903 17:37:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.903 17:37:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.903 17:37:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:14.904 17:37:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:14.904 17:37:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:14.904 17:37:53 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:14.904 17:37:53 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:14.904 17:37:53 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:14.904 17:37:53 -- host/perf.sh@17 -- # nvmftestinit 00:27:14.904 17:37:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:14.904 17:37:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.904 17:37:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:14.904 17:37:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:14.904 17:37:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:14.904 17:37:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.904 17:37:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:27:14.904 17:37:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.904 17:37:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:14.904 17:37:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:14.904 17:37:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:14.904 17:37:53 -- common/autotest_common.sh@10 -- # set +x 00:27:20.173 17:37:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:20.173 17:37:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:20.174 17:37:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:20.174 17:37:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:20.174 17:37:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:20.174 17:37:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:20.174 17:37:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:20.174 17:37:58 -- nvmf/common.sh@294 -- # net_devs=() 00:27:20.174 17:37:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:20.174 17:37:58 -- nvmf/common.sh@295 -- # e810=() 00:27:20.174 17:37:58 -- nvmf/common.sh@295 -- # local -ga e810 00:27:20.174 17:37:58 -- nvmf/common.sh@296 -- # x722=() 00:27:20.174 17:37:58 -- nvmf/common.sh@296 -- # local -ga x722 00:27:20.174 17:37:58 -- nvmf/common.sh@297 -- # mlx=() 00:27:20.174 17:37:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:20.174 17:37:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.174 17:37:58 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.174 17:37:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:20.174 17:37:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:20.174 17:37:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:20.174 17:37:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:20.174 17:37:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:20.174 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:20.174 17:37:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:20.174 17:37:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:20.174 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:20.174 17:37:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:20.174 
17:37:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:20.174 17:37:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.174 17:37:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:20.174 17:37:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.174 17:37:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:20.174 Found net devices under 0000:af:00.0: cvl_0_0 00:27:20.174 17:37:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.174 17:37:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:20.174 17:37:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.174 17:37:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:20.174 17:37:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.174 17:37:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:20.174 Found net devices under 0000:af:00.1: cvl_0_1 00:27:20.174 17:37:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.174 17:37:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:20.174 17:37:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:20.174 17:37:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:20.174 17:37:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:20.174 17:37:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.174 17:37:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.174 17:37:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.174 17:37:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:20.174 17:37:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.174 17:37:58 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.174 17:37:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:20.174 17:37:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.174 17:37:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.174 17:37:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:20.174 17:37:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:20.174 17:37:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.174 17:37:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.174 17:37:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.174 17:37:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.174 17:37:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:20.174 17:37:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.432 17:37:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.432 17:37:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.432 17:37:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:20.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:27:20.432 00:27:20.432 --- 10.0.0.2 ping statistics --- 00:27:20.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.432 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:20.432 17:37:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:27:20.432
00:27:20.432 --- 10.0.0.1 ping statistics ---
00:27:20.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.432 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:27:20.432 17:37:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:20.432 17:37:59 -- nvmf/common.sh@410 -- # return 0
00:27:20.432 17:37:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:27:20.432 17:37:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:20.432 17:37:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:27:20.432 17:37:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:27:20.432 17:37:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:20.433 17:37:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:27:20.433 17:37:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:27:20.433 17:37:59 -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:27:20.433 17:37:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:20.433 17:37:59 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:20.433 17:37:59 -- common/autotest_common.sh@10 -- # set +x
00:27:20.433 17:37:59 -- nvmf/common.sh@469 -- # nvmfpid=56971
00:27:20.433 17:37:59 -- nvmf/common.sh@470 -- # waitforlisten 56971
00:27:20.433 17:37:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:27:20.433 17:37:59 -- common/autotest_common.sh@819 -- # '[' -z 56971 ']'
00:27:20.433 17:37:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:20.433 17:37:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:20.433 17:37:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:20.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:20.433 17:37:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:20.433 17:37:59 -- common/autotest_common.sh@10 -- # set +x
00:27:20.433 [2024-07-12 17:37:59.314091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:27:20.433 [2024-07-12 17:37:59.314147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:20.433 EAL: No free 2048 kB hugepages reported on node 1
00:27:20.692 [2024-07-12 17:37:59.403523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:20.692 [2024-07-12 17:37:59.446539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:20.692 [2024-07-12 17:37:59.446691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:20.692 [2024-07-12 17:37:59.446704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:20.692 [2024-07-12 17:37:59.446713] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:20.692 [2024-07-12 17:37:59.446770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:20.692 [2024-07-12 17:37:59.446875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:20.692 [2024-07-12 17:37:59.446968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:20.692 [2024-07-12 17:37:59.446970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:21.636 17:38:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:21.636 17:38:00 -- common/autotest_common.sh@852 -- # return 0
00:27:21.636 17:38:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:27:21.636 17:38:00 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:21.636 17:38:00 -- common/autotest_common.sh@10 -- # set +x
00:27:21.636 17:38:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:21.636 17:38:00 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:27:21.636 17:38:00 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:27:24.919 17:38:03 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:27:24.919 17:38:03 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:27:24.919 17:38:03 -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0
00:27:24.919 17:38:03 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:27:25.177 17:38:03 -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:27:25.177 17:38:03 -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']'
00:27:25.177 17:38:03 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:27:25.177 17:38:03 -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:27:25.177 17:38:03 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:27:25.177 [2024-07-12 17:38:04.135743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:25.435 17:38:04 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:25.693 17:38:04 -- host/perf.sh@45 -- # for bdev in $bdevs
00:27:25.693 17:38:04 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:25.951 17:38:04 -- host/perf.sh@45 -- # for bdev in $bdevs
00:27:25.951 17:38:04 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:27:25.951 17:38:04 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:26.210 [2024-07-12 17:38:05.139920] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:26.210 17:38:05 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:26.469 17:38:05 -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']'
00:27:26.469 17:38:05 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0'
00:27:26.469 17:38:05 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:27:26.469 17:38:05 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0'
00:27:27.842 Initializing NVMe Controllers
00:27:27.842 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54]
00:27:27.842 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0
00:27:27.842 Initialization complete. Launching workers.
00:27:27.842 ========================================================
00:27:27.842 Latency(us)
00:27:27.842 Device Information : IOPS MiB/s Average min max
00:27:27.842 PCIE (0000:86:00.0) NSID 1 from core 0: 70522.77 275.48 453.06 60.38 5282.40
00:27:27.842 ========================================================
00:27:27.842 Total : 70522.77 275.48 453.06 60.38 5282.40
00:27:27.842
00:27:27.842 17:38:06 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:27.842 EAL: No free 2048 kB hugepages reported on node 1
00:27:29.221 Initializing NVMe Controllers
00:27:29.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:29.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:29.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:29.221 Initialization complete. Launching workers.
00:27:29.221 ========================================================
00:27:29.221 Latency(us)
00:27:29.221 Device Information : IOPS MiB/s Average min max
00:27:29.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.00 0.37 10774.68 146.45 44793.56
00:27:29.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18667.66 6971.37 47885.11
00:27:29.221 ========================================================
00:27:29.221 Total : 151.00 0.59 13701.88 146.45 47885.11
00:27:29.221
00:27:29.221 17:38:08 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:29.221 EAL: No free 2048 kB hugepages reported on node 1
00:27:30.597 Initializing NVMe Controllers
00:27:30.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:30.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:30.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:30.597 Initialization complete. Launching workers.
00:27:30.597 ========================================================
00:27:30.597 Latency(us)
00:27:30.597 Device Information : IOPS MiB/s Average min max
00:27:30.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7984.74 31.19 4007.35 591.43 8890.24
00:27:30.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3894.92 15.21 8218.08 6480.90 15774.95
00:27:30.597 ========================================================
00:27:30.597 Total : 11879.67 46.40 5387.90 591.43 15774.95
00:27:30.597
00:27:30.597 17:38:09 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:27:30.597 17:38:09 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:27:30.597 17:38:09 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:30.597 EAL: No free 2048 kB hugepages reported on node 1
00:27:33.126 Initializing NVMe Controllers
00:27:33.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:33.126 Controller IO queue size 128, less than required.
00:27:33.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:33.126 Controller IO queue size 128, less than required.
00:27:33.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:33.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:33.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:33.126 Initialization complete. Launching workers.
00:27:33.126 ========================================================
00:27:33.126 Latency(us)
00:27:33.126 Device Information : IOPS MiB/s Average min max
00:27:33.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1489.77 372.44 87346.86 54277.59 148649.22
00:27:33.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.70 153.18 214386.14 56042.82 318937.37
00:27:33.126 ========================================================
00:27:33.126 Total : 2102.47 525.62 124368.53 54277.59 318937.37
00:27:33.126
00:27:33.126 17:38:11 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:27:33.126 EAL: No free 2048 kB hugepages reported on node 1
00:27:33.126 No valid NVMe controllers or AIO or URING devices found
00:27:33.126 Initializing NVMe Controllers
00:27:33.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:33.126 Controller IO queue size 128, less than required.
00:27:33.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:33.126 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:27:33.126 Controller IO queue size 128, less than required.
00:27:33.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:33.126 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:27:33.126 WARNING: Some requested NVMe devices were skipped
00:27:33.126 17:38:12 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:27:33.126 EAL: No free 2048 kB hugepages reported on node 1
00:27:35.668 Initializing NVMe Controllers
00:27:35.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:35.668 Controller IO queue size 128, less than required.
00:27:35.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:35.668 Controller IO queue size 128, less than required.
00:27:35.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:35.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:35.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:35.668 Initialization complete. Launching workers.
00:27:35.668
00:27:35.668 ====================
00:27:35.668 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:27:35.668 TCP transport:
00:27:35.669 polls: 11198
00:27:35.669 idle_polls: 7964
00:27:35.669 sock_completions: 3234
00:27:35.669 nvme_completions: 5595
00:27:35.669 submitted_requests: 8617
00:27:35.669 queued_requests: 1
00:27:35.669
00:27:35.669 ====================
00:27:35.669 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:27:35.669 TCP transport:
00:27:35.669 polls: 11436
00:27:35.669 idle_polls: 7672
00:27:35.669 sock_completions: 3764
00:27:35.669 nvme_completions: 5984
00:27:35.669 submitted_requests: 9234
00:27:35.669 queued_requests: 1
00:27:35.669 ========================================================
00:27:35.669 Latency(us)
00:27:35.669 Device Information : IOPS MiB/s Average min max
00:27:35.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1460.97 365.24 90030.94 53774.60 135812.72
00:27:35.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1557.87 389.47 83025.91 39206.76 112129.80
00:27:35.669 ========================================================
00:27:35.669 Total : 3018.84 754.71 86416.00 39206.76 135812.72
00:27:35.669
00:27:35.669 17:38:14 -- host/perf.sh@66 -- # sync
00:27:35.927 17:38:14 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:35.927 17:38:14 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:27:35.927 17:38:14 -- host/perf.sh@71 -- # '[' -n 0000:86:00.0 ']'
00:27:35.927 17:38:14 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:27:39.208 17:38:18 -- host/perf.sh@72 -- # ls_guid=ae4ab143-4491-4e4b-86f2-af8896ec6c0f
00:27:39.208 17:38:18 -- host/perf.sh@73 -- # get_lvs_free_mb ae4ab143-4491-4e4b-86f2-af8896ec6c0f
00:27:39.208 17:38:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ae4ab143-4491-4e4b-86f2-af8896ec6c0f
00:27:39.467 17:38:18 -- common/autotest_common.sh@1344 -- # local lvs_info
00:27:39.467 17:38:18 -- common/autotest_common.sh@1345 -- # local fc
00:27:39.467 17:38:18 -- common/autotest_common.sh@1346 -- # local cs
00:27:39.467 17:38:18 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:27:39.467 17:38:18 -- common/autotest_common.sh@1347 -- # lvs_info='[
00:27:39.467 {
00:27:39.467 "uuid": "ae4ab143-4491-4e4b-86f2-af8896ec6c0f",
00:27:39.467 "name": "lvs_0",
00:27:39.467 "base_bdev": "Nvme0n1",
00:27:39.467 "total_data_clusters": 238234,
00:27:39.467 "free_clusters": 238234,
00:27:39.467 "block_size": 512,
00:27:39.467 "cluster_size": 4194304
00:27:39.467 }
00:27:39.467 ]'
00:27:39.467 17:38:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ae4ab143-4491-4e4b-86f2-af8896ec6c0f") .free_clusters'
00:27:39.726 17:38:18 -- common/autotest_common.sh@1348 -- # fc=238234
00:27:39.726 17:38:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ae4ab143-4491-4e4b-86f2-af8896ec6c0f") .cluster_size'
00:27:39.726 17:38:18 -- common/autotest_common.sh@1349 -- # cs=4194304
00:27:39.726 17:38:18 -- common/autotest_common.sh@1352 -- # free_mb=952936
00:27:39.726 17:38:18 -- common/autotest_common.sh@1353 -- # echo 952936
00:27:39.726 952936
00:27:39.726 17:38:18 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']'
00:27:39.726 17:38:18 -- host/perf.sh@78 -- # free_mb=20480
00:27:39.726 17:38:18 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae4ab143-4491-4e4b-86f2-af8896ec6c0f lbd_0 20480
00:27:40.292 17:38:18 -- host/perf.sh@80 -- # lb_guid=2381e8e6-803e-4db3-b3f7-7203fee770e8
00:27:40.292 17:38:18 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2381e8e6-803e-4db3-b3f7-7203fee770e8 lvs_n_0
00:27:40.871 17:38:19 -- host/perf.sh@83 -- # ls_nested_guid=339a00ff-8cb4-4a28-a3b1-d2a0847051b8
00:27:40.871 17:38:19 -- host/perf.sh@84 -- # get_lvs_free_mb 339a00ff-8cb4-4a28-a3b1-d2a0847051b8
00:27:40.871 17:38:19 -- common/autotest_common.sh@1343 -- # local lvs_uuid=339a00ff-8cb4-4a28-a3b1-d2a0847051b8
00:27:40.871 17:38:19 -- common/autotest_common.sh@1344 -- # local lvs_info
00:27:40.871 17:38:19 -- common/autotest_common.sh@1345 -- # local fc
00:27:40.871 17:38:19 -- common/autotest_common.sh@1346 -- # local cs
00:27:40.871 17:38:19 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:27:41.180 17:38:19 -- common/autotest_common.sh@1347 -- # lvs_info='[
00:27:41.180 {
00:27:41.180 "uuid": "ae4ab143-4491-4e4b-86f2-af8896ec6c0f",
00:27:41.180 "name": "lvs_0",
00:27:41.180 "base_bdev": "Nvme0n1",
00:27:41.180 "total_data_clusters": 238234,
00:27:41.180 "free_clusters": 233114,
00:27:41.180 "block_size": 512,
00:27:41.180 "cluster_size": 4194304
00:27:41.180 },
00:27:41.180 {
00:27:41.180 "uuid": "339a00ff-8cb4-4a28-a3b1-d2a0847051b8",
00:27:41.180 "name": "lvs_n_0",
00:27:41.180 "base_bdev": "2381e8e6-803e-4db3-b3f7-7203fee770e8",
00:27:41.180 "total_data_clusters": 5114,
00:27:41.180 "free_clusters": 5114,
00:27:41.180 "block_size": 512,
00:27:41.180 "cluster_size": 4194304
00:27:41.180 }
00:27:41.180 ]'
00:27:41.180 17:38:19 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="339a00ff-8cb4-4a28-a3b1-d2a0847051b8") .free_clusters'
00:27:41.180 17:38:20 -- common/autotest_common.sh@1348 -- # fc=5114
00:27:41.180 17:38:20 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="339a00ff-8cb4-4a28-a3b1-d2a0847051b8") .cluster_size'
00:27:41.180 17:38:20 -- common/autotest_common.sh@1349 -- # cs=4194304
00:27:41.180 17:38:20 -- common/autotest_common.sh@1352 -- # free_mb=20456
00:27:41.180 17:38:20 -- common/autotest_common.sh@1353 -- # echo 20456
00:27:41.180 20456
00:27:41.180 17:38:20 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:27:41.180 17:38:20 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 339a00ff-8cb4-4a28-a3b1-d2a0847051b8 lbd_nest_0 20456
00:27:41.486 17:38:20 -- host/perf.sh@88 -- # lb_nested_guid=e0a8890b-976c-4014-8c61-eeabe8553881
00:27:41.486 17:38:20 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:41.744 17:38:20 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:27:41.744 17:38:20 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e0a8890b-976c-4014-8c61-eeabe8553881
00:27:42.002 17:38:20 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:42.260 17:38:21 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:27:42.260 17:38:21 -- host/perf.sh@96 -- # io_size=("512" "131072")
00:27:42.260 17:38:21 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:27:42.260 17:38:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:27:42.260 17:38:21 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:42.260 EAL: No free 2048 kB hugepages reported on node 1
00:27:54.466 Initializing NVMe Controllers
00:27:54.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:54.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:54.466 Initialization complete. Launching workers.
00:27:54.466 ========================================================
00:27:54.466 Latency(us)
00:27:54.466 Device Information : IOPS MiB/s Average min max
00:27:54.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.20 0.02 20391.25 175.81 45459.48
00:27:54.466 ========================================================
00:27:54.466 Total : 49.20 0.02 20391.25 175.81 45459.48
00:27:54.466
00:27:54.466 17:38:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:27:54.466 17:38:31 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:54.466 EAL: No free 2048 kB hugepages reported on node 1
00:28:04.441 Initializing NVMe Controllers
00:28:04.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:04.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:04.441 Initialization complete. Launching workers.
00:28:04.441 ========================================================
00:28:04.441 Latency(us)
00:28:04.441 Device Information : IOPS MiB/s Average min max
00:28:04.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.69 8.46 14796.40 5047.19 50680.23
00:28:04.441 ========================================================
00:28:04.441 Total : 67.69 8.46 14796.40 5047.19 50680.23
00:28:04.441
00:28:04.441 17:38:41 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:04.441 17:38:41 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:04.441 17:38:41 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:04.441 EAL: No free 2048 kB hugepages reported on node 1
00:28:14.490 Initializing NVMe Controllers
00:28:14.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:14.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:14.490 Initialization complete. Launching workers.
00:28:14.490 ========================================================
00:28:14.490 Latency(us)
00:28:14.490 Device Information : IOPS MiB/s Average min max
00:28:14.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7172.50 3.50 4461.93 315.46 12062.59
00:28:14.490 ========================================================
00:28:14.490 Total : 7172.50 3.50 4461.93 315.46 12062.59
00:28:14.490
00:28:14.490 17:38:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:14.490 17:38:52 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:14.490 EAL: No free 2048 kB hugepages reported on node 1
00:28:24.466 Initializing NVMe Controllers
00:28:24.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:24.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:24.466 Initialization complete. Launching workers.
00:28:24.466 ========================================================
00:28:24.466 Latency(us)
00:28:24.466 Device Information : IOPS MiB/s Average min max
00:28:24.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3675.00 459.37 8711.65 651.65 21626.31
00:28:24.466 ========================================================
00:28:24.466 Total : 3675.00 459.37 8711.65 651.65 21626.31
00:28:24.466
00:28:24.466 17:39:02 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:24.466 17:39:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:24.466 17:39:02 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:24.466 EAL: No free 2048 kB hugepages reported on node 1
00:28:34.441 Initializing NVMe Controllers
00:28:34.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:34.441 Controller IO queue size 128, less than required.
00:28:34.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:34.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:34.441 Initialization complete. Launching workers.
00:28:34.441 ========================================================
00:28:34.441 Latency(us)
00:28:34.441 Device Information : IOPS MiB/s Average min max
00:28:34.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10560.10 5.16 12126.77 1871.81 30418.38
00:28:34.441 ========================================================
00:28:34.441 Total : 10560.10 5.16 12126.77 1871.81 30418.38
00:28:34.441
00:28:34.441 17:39:12 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:34.441 17:39:12 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:34.441 EAL: No free 2048 kB hugepages reported on node 1
00:28:44.561 Initializing NVMe Controllers
00:28:44.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:44.561 Controller IO queue size 128, less than required.
00:28:44.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:44.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:44.561 Initialization complete. Launching workers.
00:28:44.561 ========================================================
00:28:44.561 Latency(us)
00:28:44.561 Device Information : IOPS MiB/s Average min max
00:28:44.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1214.75 151.84 105826.78 24092.30 207905.87
00:28:44.561 ========================================================
00:28:44.561 Total : 1214.75 151.84 105826.78 24092.30 207905.87
00:28:44.562
00:28:44.562 17:39:23 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:44.820 17:39:23 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e0a8890b-976c-4014-8c61-eeabe8553881
00:28:45.386 17:39:24 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:28:45.644 17:39:24 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2381e8e6-803e-4db3-b3f7-7203fee770e8
00:28:45.903 17:39:24 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:28:46.161 17:39:25 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:28:46.161 17:39:25 -- host/perf.sh@114 -- # nvmftestfini
00:28:46.161 17:39:25 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:46.161 17:39:25 -- nvmf/common.sh@116 -- # sync
00:28:46.161 17:39:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:28:46.161 17:39:25 -- nvmf/common.sh@119 -- # set +e
00:28:46.161 17:39:25 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:46.161 17:39:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:28:46.161 rmmod nvme_tcp
00:28:46.161 rmmod nvme_fabrics
00:28:46.161 rmmod nvme_keyring
00:28:46.161 17:39:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:46.161 17:39:25 -- nvmf/common.sh@123 -- # set -e
00:28:46.161 17:39:25 -- nvmf/common.sh@124 -- # return 0
00:28:46.161 17:39:25 -- nvmf/common.sh@477 -- # '[' -n 56971 ']'
00:28:46.161 17:39:25 -- nvmf/common.sh@478 -- # killprocess 56971
00:28:46.161 17:39:25 -- common/autotest_common.sh@926 -- # '[' -z 56971 ']'
00:28:46.161 17:39:25 -- common/autotest_common.sh@930 -- # kill -0 56971
00:28:46.161 17:39:25 -- common/autotest_common.sh@931 -- # uname
00:28:46.161 17:39:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:46.161 17:39:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56971
00:28:46.419 17:39:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:28:46.419 17:39:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:28:46.419 17:39:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56971'
00:28:46.419 killing process with pid 56971
00:28:46.419 17:39:25 -- common/autotest_common.sh@945 -- # kill 56971
00:28:46.419 17:39:25 -- common/autotest_common.sh@950 -- # wait 56971
00:28:47.795 17:39:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:47.795 17:39:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:28:47.795 17:39:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:28:47.795 17:39:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:47.795 17:39:26 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:28:47.795 17:39:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:47.795 17:39:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:47.795 17:39:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:50.329 17:39:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:28:50.329
00:28:50.329 real 1m35.131s
00:28:50.329 user 5m43.608s
00:28:50.329 sys 0m15.867s
00:28:50.329 17:39:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:50.329 17:39:28 -- common/autotest_common.sh@10 -- # set +x
00:28:50.329 ************************************
00:28:50.329 END TEST nvmf_perf
00:28:50.329 ************************************
00:28:50.329 17:39:28 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:28:50.329 17:39:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:28:50.329 17:39:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:50.329 17:39:28 -- common/autotest_common.sh@10 -- # set +x
00:28:50.329 ************************************
00:28:50.329 START TEST nvmf_fio_host
00:28:50.329 ************************************
00:28:50.329 17:39:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:28:50.329 * Looking for test storage...
00:28:50.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:50.329 17:39:28 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:50.329 17:39:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:50.329 17:39:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:50.329 17:39:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:50.329 17:39:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.330 17:39:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.330 17:39:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.330 17:39:28 -- paths/export.sh@5 -- # export PATH
00:28:50.330 17:39:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:50.330 17:39:28 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:50.330 17:39:28 -- nvmf/common.sh@7 -- # uname -s
00:28:50.330 17:39:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:50.330 17:39:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:50.330 17:39:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:50.330 17:39:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:50.330 17:39:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:50.330 17:39:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:50.330 17:39:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:50.330 17:39:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:50.330 17:39:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:50.330 17:39:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:50.330 17:39:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:28:50.330 17:39:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:28:50.330 17:39:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:50.330 17:39:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:50.330 17:39:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:50.330 17:39:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:50.330 17:39:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:50.330 17:39:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:50.330 17:39:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:50.330 17:39:28 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.330 17:39:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.330 17:39:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.330 17:39:28 -- paths/export.sh@5 -- # export PATH 00:28:50.330 17:39:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.330 17:39:28 -- nvmf/common.sh@46 -- # : 0 00:28:50.330 17:39:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:50.330 17:39:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:50.330 17:39:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:50.330 17:39:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.330 17:39:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.330 17:39:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:50.330 17:39:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:50.330 17:39:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:50.330 17:39:28 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:50.330 17:39:28 -- host/fio.sh@14 -- # nvmftestinit 00:28:50.330 17:39:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:50.330 17:39:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.330 17:39:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:50.330 17:39:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:50.330 17:39:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:50.330 17:39:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.330 17:39:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.330 17:39:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:28:50.330 17:39:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:50.330 17:39:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:50.330 17:39:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:50.330 17:39:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.602 17:39:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:55.602 17:39:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:55.602 17:39:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:55.602 17:39:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:55.602 17:39:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:55.602 17:39:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:55.602 17:39:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:55.602 17:39:34 -- nvmf/common.sh@294 -- # net_devs=() 00:28:55.602 17:39:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:55.602 17:39:34 -- nvmf/common.sh@295 -- # e810=() 00:28:55.602 17:39:34 -- nvmf/common.sh@295 -- # local -ga e810 00:28:55.602 17:39:34 -- nvmf/common.sh@296 -- # x722=() 00:28:55.602 17:39:34 -- nvmf/common.sh@296 -- # local -ga x722 00:28:55.602 17:39:34 -- nvmf/common.sh@297 -- # mlx=() 00:28:55.602 17:39:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:55.602 17:39:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:28:55.602 17:39:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.602 17:39:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:55.602 17:39:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:55.602 17:39:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:55.602 17:39:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:55.602 17:39:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:55.602 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:55.602 17:39:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:55.602 17:39:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:55.602 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:55.602 17:39:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.602 17:39:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:55.603 17:39:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:55.603 17:39:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:55.603 
17:39:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:55.603 17:39:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:55.603 17:39:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.603 17:39:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:55.603 17:39:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.603 17:39:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:55.603 Found net devices under 0000:af:00.0: cvl_0_0 00:28:55.603 17:39:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.603 17:39:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:55.603 17:39:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.603 17:39:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:55.603 17:39:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.603 17:39:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:55.603 Found net devices under 0000:af:00.1: cvl_0_1 00:28:55.603 17:39:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.603 17:39:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:55.603 17:39:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:55.603 17:39:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:55.603 17:39:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:55.603 17:39:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:55.603 17:39:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.603 17:39:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.603 17:39:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.603 17:39:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:55.603 17:39:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.603 17:39:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.603 17:39:34 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:55.603 17:39:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.603 17:39:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.603 17:39:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:55.603 17:39:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:55.603 17:39:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.603 17:39:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.603 17:39:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.603 17:39:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.603 17:39:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:55.603 17:39:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.603 17:39:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.603 17:39:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.603 17:39:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:55.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:28:55.603 00:28:55.603 --- 10.0.0.2 ping statistics --- 00:28:55.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.603 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:55.603 17:39:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:28:55.603 00:28:55.603 --- 10.0.0.1 ping statistics --- 00:28:55.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.603 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:55.603 17:39:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.603 17:39:34 -- nvmf/common.sh@410 -- # return 0 00:28:55.603 17:39:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:55.603 17:39:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.603 17:39:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:55.603 17:39:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:55.603 17:39:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.603 17:39:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:55.603 17:39:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:55.863 17:39:34 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:55.863 17:39:34 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:55.863 17:39:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:55.863 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:28:55.863 17:39:34 -- host/fio.sh@24 -- # nvmfpid=76558 00:28:55.863 17:39:34 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.863 17:39:34 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:55.863 17:39:34 -- host/fio.sh@28 -- # waitforlisten 76558 00:28:55.863 17:39:34 -- common/autotest_common.sh@819 -- # '[' -z 76558 ']' 00:28:55.863 17:39:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.863 17:39:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:55.863 17:39:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:28:55.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.863 17:39:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:55.863 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:28:55.863 [2024-07-12 17:39:34.643624] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:55.863 [2024-07-12 17:39:34.643684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.863 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.863 [2024-07-12 17:39:34.732425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.863 [2024-07-12 17:39:34.774823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:55.863 [2024-07-12 17:39:34.774972] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.863 [2024-07-12 17:39:34.774984] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.863 [2024-07-12 17:39:34.774994] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:55.863 [2024-07-12 17:39:34.775056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.863 [2024-07-12 17:39:34.775158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.863 [2024-07-12 17:39:34.775227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.863 [2024-07-12 17:39:34.775229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.797 17:39:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:56.797 17:39:35 -- common/autotest_common.sh@852 -- # return 0 00:28:56.797 17:39:35 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:57.056 [2024-07-12 17:39:35.797886] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.056 17:39:35 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:57.056 17:39:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:57.056 17:39:35 -- common/autotest_common.sh@10 -- # set +x 00:28:57.056 17:39:35 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:57.315 Malloc1 00:28:57.315 17:39:36 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.573 17:39:36 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:57.832 17:39:36 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.832 [2024-07-12 17:39:36.769086] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.832 17:39:36 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:58.091 17:39:36 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:58.091 17:39:36 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:58.091 17:39:36 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:58.091 17:39:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:58.091 17:39:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:58.091 17:39:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:58.091 17:39:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:58.091 17:39:36 -- common/autotest_common.sh@1320 -- # shift 00:28:58.091 17:39:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:58.092 17:39:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:58.092 17:39:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:58.092 17:39:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:58.092 17:39:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:58.092 17:39:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:58.092 17:39:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:58.092 17:39:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:58.092 17:39:36 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:58.092 17:39:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:58.092 17:39:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:58.092 17:39:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:58.092 17:39:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:58.092 17:39:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:58.092 17:39:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:58.659 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:58.659 fio-3.35 00:28:58.659 Starting 1 thread 00:28:58.659 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.209 [2024-07-12 17:39:39.785809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59a20 is same with the state(5) to be set 00:29:01.209 00:29:01.209 test: (groupid=0, jobs=1): err= 0: pid=77284: Fri Jul 12 17:39:39 2024 00:29:01.209 read: IOPS=8439, BW=33.0MiB/s (34.6MB/s)(66.2MiB/2007msec) 00:29:01.209 slat (usec): min=2, max=249, avg= 2.56, stdev= 2.71 00:29:01.209 clat (usec): min=3083, max=14513, avg=8356.09, stdev=653.97 00:29:01.209 lat (usec): min=3112, max=14516, avg=8358.65, stdev=653.73 00:29:01.209 clat percentiles (usec): 00:29:01.209 | 1.00th=[ 6783], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7832], 00:29:01.209 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8586], 00:29:01.209 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9372], 00:29:01.209 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[11600], 99.95th=[12387], 00:29:01.209 | 99.99th=[13698] 00:29:01.209 bw ( KiB/s): min=32864, max=34176, per=99.99%, 
avg=33756.00, stdev=612.35, samples=4 00:29:01.209 iops : min= 8216, max= 8544, avg=8439.00, stdev=153.09, samples=4 00:29:01.209 write: IOPS=8439, BW=33.0MiB/s (34.6MB/s)(66.2MiB/2007msec); 0 zone resets 00:29:01.209 slat (usec): min=2, max=233, avg= 2.66, stdev= 1.94 00:29:01.209 clat (usec): min=2462, max=13510, avg=6755.83, stdev=563.27 00:29:01.209 lat (usec): min=2477, max=13513, avg=6758.49, stdev=563.10 00:29:01.209 clat percentiles (usec): 00:29:01.209 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:29:01.209 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6849], 00:29:01.209 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7570], 00:29:01.209 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[11600], 99.95th=[12387], 00:29:01.209 | 99.99th=[12780] 00:29:01.209 bw ( KiB/s): min=33472, max=33856, per=99.95%, avg=33744.00, stdev=183.83, samples=4 00:29:01.209 iops : min= 8368, max= 8464, avg=8436.00, stdev=45.96, samples=4 00:29:01.209 lat (msec) : 4=0.12%, 10=99.65%, 20=0.24% 00:29:01.209 cpu : usr=74.58%, sys=23.53%, ctx=38, majf=0, minf=4 00:29:01.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:01.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:01.209 issued rwts: total=16938,16939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:01.209 00:29:01.209 Run status group 0 (all jobs): 00:29:01.209 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=66.2MiB (69.4MB), run=2007-2007msec 00:29:01.209 WRITE: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=66.2MiB (69.4MB), run=2007-2007msec 00:29:01.209 17:39:39 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:01.209 17:39:39 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:01.209 17:39:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:01.209 17:39:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:01.209 17:39:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:01.209 17:39:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:01.209 17:39:39 -- common/autotest_common.sh@1320 -- # shift 00:29:01.209 17:39:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:01.209 17:39:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:01.209 17:39:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:01.209 17:39:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:01.209 17:39:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:01.209 17:39:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:01.209 17:39:39 -- 
common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:01.209 17:39:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:01.467 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:01.468 fio-3.35 00:29:01.468 Starting 1 thread 00:29:01.468 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.002 00:29:04.002 test: (groupid=0, jobs=1): err= 0: pid=77784: Fri Jul 12 17:39:42 2024 00:29:04.002 read: IOPS=8287, BW=129MiB/s (136MB/s)(260MiB/2004msec) 00:29:04.002 slat (usec): min=3, max=126, avg= 4.20, stdev= 1.71 00:29:04.002 clat (usec): min=2278, max=55288, avg=9181.98, stdev=4151.63 00:29:04.002 lat (usec): min=2282, max=55292, avg=9186.18, stdev=4151.68 00:29:04.002 clat percentiles (usec): 00:29:04.002 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6980], 00:29:04.002 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9503], 00:29:04.002 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11469], 95.00th=[12518], 00:29:04.002 | 99.00th=[15401], 99.50th=[47449], 99.90th=[54264], 99.95th=[54789], 00:29:04.002 | 99.99th=[55313] 00:29:04.002 bw ( KiB/s): min=53056, max=79840, per=51.26%, avg=67968.00, stdev=13396.42, samples=4 00:29:04.002 iops : min= 3316, max= 4990, avg=4248.00, stdev=837.28, samples=4 00:29:04.002 write: IOPS=5004, BW=78.2MiB/s (82.0MB/s)(139MiB/1777msec); 0 zone resets 00:29:04.002 slat (usec): min=45, max=314, avg=46.99, stdev= 6.16 00:29:04.002 clat (usec): min=2625, max=17553, avg=10706.00, stdev=1911.09 00:29:04.002 lat (usec): min=2671, max=17599, avg=10752.99, stdev=1911.68 00:29:04.002 clat percentiles (usec): 00:29:04.002 | 1.00th=[ 6456], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9110], 00:29:04.002 | 30.00th=[ 
9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:29:04.002 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13173], 95.00th=[13960], 00:29:04.002 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16712], 99.95th=[17171], 00:29:04.002 | 99.99th=[17433] 00:29:04.002 bw ( KiB/s): min=53984, max=83424, per=88.28%, avg=70688.00, stdev=14097.25, samples=4 00:29:04.002 iops : min= 3374, max= 5214, avg=4418.00, stdev=881.08, samples=4 00:29:04.002 lat (msec) : 4=0.44%, 10=57.12%, 20=41.95%, 50=0.27%, 100=0.23% 00:29:04.002 cpu : usr=89.77%, sys=9.64%, ctx=16, majf=0, minf=1 00:29:04.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:04.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:04.002 issued rwts: total=16609,8893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:04.002 00:29:04.002 Run status group 0 (all jobs): 00:29:04.002 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=260MiB (272MB), run=2004-2004msec 00:29:04.002 WRITE: bw=78.2MiB/s (82.0MB/s), 78.2MiB/s-78.2MiB/s (82.0MB/s-82.0MB/s), io=139MiB (146MB), run=1777-1777msec 00:29:04.002 17:39:42 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.002 17:39:42 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:04.002 17:39:42 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:04.002 17:39:42 -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:04.002 17:39:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:04.002 17:39:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:04.002 17:39:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:04.002 17:39:42 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:04.002 17:39:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:04.002 17:39:42 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:04.002 17:39:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:29:04.002 17:39:42 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 -i 10.0.0.2 00:29:07.292 Nvme0n1 00:29:07.292 17:39:45 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:10.578 17:39:48 -- host/fio.sh@53 -- # ls_guid=bb10d4f9-0c8d-4a95-9b9e-7d35bebb0b31 00:29:10.578 17:39:48 -- host/fio.sh@54 -- # get_lvs_free_mb bb10d4f9-0c8d-4a95-9b9e-7d35bebb0b31 00:29:10.578 17:39:48 -- common/autotest_common.sh@1343 -- # local lvs_uuid=bb10d4f9-0c8d-4a95-9b9e-7d35bebb0b31 00:29:10.578 17:39:48 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:10.578 17:39:48 -- common/autotest_common.sh@1345 -- # local fc 00:29:10.578 17:39:48 -- common/autotest_common.sh@1346 -- # local cs 00:29:10.578 17:39:48 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:10.578 17:39:49 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:10.578 { 00:29:10.578 "uuid": "bb10d4f9-0c8d-4a95-9b9e-7d35bebb0b31", 00:29:10.578 "name": "lvs_0", 00:29:10.578 "base_bdev": "Nvme0n1", 00:29:10.578 "total_data_clusters": 930, 00:29:10.578 "free_clusters": 930, 00:29:10.578 "block_size": 512, 00:29:10.578 "cluster_size": 1073741824 00:29:10.578 } 00:29:10.578 ]' 00:29:10.578 17:39:49 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="bb10d4f9-0c8d-4a95-9b9e-7d35bebb0b31") .free_clusters' 00:29:10.578 17:39:49 -- common/autotest_common.sh@1348 -- # fc=930 00:29:10.578 17:39:49 -- 
common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="bb10d4f9-0c8d-4a95-9b9e-7d35bebb0b31") .cluster_size' 00:29:10.578 17:39:49 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:29:10.578 17:39:49 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:29:10.578 17:39:49 -- common/autotest_common.sh@1353 -- # echo 952320 00:29:10.578 952320 00:29:10.578 17:39:49 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:10.836 1ecb7056-4fc9-41ab-83b7-8ce1e3c1bc58 00:29:10.836 17:39:49 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:11.093 17:39:49 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:11.093 17:39:50 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:11.352 17:39:50 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:11.352 17:39:50 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:11.352 17:39:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:11.352 17:39:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:11.352 17:39:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:11.352 17:39:50 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.352 17:39:50 -- common/autotest_common.sh@1320 -- # shift 00:29:11.352 17:39:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:11.352 17:39:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.352 17:39:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.352 17:39:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:11.352 17:39:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:11.352 17:39:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:11.352 17:39:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:11.352 17:39:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.635 17:39:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.635 17:39:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:11.635 17:39:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:11.635 17:39:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:11.635 17:39:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:11.635 17:39:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:11.635 17:39:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:11.895 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:11.895 fio-3.35 00:29:11.895 Starting 1 thread 00:29:11.895 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.414 00:29:14.414 test: (groupid=0, jobs=1): err= 0: pid=79793: Fri Jul 12 17:39:53 
2024 00:29:14.414 read: IOPS=5716, BW=22.3MiB/s (23.4MB/s)(44.9MiB/2009msec) 00:29:14.414 slat (usec): min=2, max=123, avg= 2.52, stdev= 1.59 00:29:14.414 clat (usec): min=854, max=171121, avg=12266.17, stdev=11878.52 00:29:14.414 lat (usec): min=857, max=171140, avg=12268.69, stdev=11878.75 00:29:14.414 clat percentiles (msec): 00:29:14.414 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:29:14.414 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:29:14.414 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 14], 00:29:14.414 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:14.414 | 99.99th=[ 171] 00:29:14.414 bw ( KiB/s): min=16120, max=25264, per=99.93%, avg=22850.00, stdev=4490.80, samples=4 00:29:14.414 iops : min= 4030, max= 6316, avg=5712.50, stdev=1122.70, samples=4 00:29:14.414 write: IOPS=5706, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec); 0 zone resets 00:29:14.414 slat (usec): min=2, max=110, avg= 2.61, stdev= 1.09 00:29:14.414 clat (usec): min=306, max=169313, avg=9990.32, stdev=11161.60 00:29:14.414 lat (usec): min=309, max=169319, avg=9992.94, stdev=11161.87 00:29:14.414 clat percentiles (msec): 00:29:14.414 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:29:14.414 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:29:14.414 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:29:14.414 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:29:14.414 | 99.99th=[ 169] 00:29:14.414 bw ( KiB/s): min=17192, max=24768, per=99.86%, avg=22794.00, stdev=3735.52, samples=4 00:29:14.414 iops : min= 4298, max= 6192, avg=5698.50, stdev=933.88, samples=4 00:29:14.414 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:14.414 lat (msec) : 2=0.04%, 4=0.13%, 10=45.71%, 20=53.54%, 250=0.56% 00:29:14.414 cpu : usr=72.86%, sys=25.65%, ctx=64, majf=0, minf=4 00:29:14.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:14.414 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:14.414 issued rwts: total=11485,11464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:14.414 00:29:14.414 Run status group 0 (all jobs): 00:29:14.414 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.0MB), run=2009-2009msec 00:29:14.414 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2009-2009msec 00:29:14.414 17:39:53 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:14.414 17:39:53 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:15.791 17:39:54 -- host/fio.sh@64 -- # ls_nested_guid=cb296969-e665-4cfe-919b-52a7eb6d5943 00:29:15.791 17:39:54 -- host/fio.sh@65 -- # get_lvs_free_mb cb296969-e665-4cfe-919b-52a7eb6d5943 00:29:15.791 17:39:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=cb296969-e665-4cfe-919b-52a7eb6d5943 00:29:15.791 17:39:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:15.791 17:39:54 -- common/autotest_common.sh@1345 -- # local fc 00:29:15.791 17:39:54 -- common/autotest_common.sh@1346 -- # local cs 00:29:15.791 17:39:54 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:15.791 17:39:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:15.791 { 00:29:15.791 "uuid": "bb10d4f9-0c8d-4a95-9b9e-7d35bebb0b31", 00:29:15.791 "name": "lvs_0", 00:29:15.791 "base_bdev": "Nvme0n1", 00:29:15.791 "total_data_clusters": 930, 00:29:15.791 "free_clusters": 0, 00:29:15.791 "block_size": 512, 00:29:15.791 "cluster_size": 1073741824 00:29:15.791 }, 00:29:15.791 { 
00:29:15.791 "uuid": "cb296969-e665-4cfe-919b-52a7eb6d5943", 00:29:15.791 "name": "lvs_n_0", 00:29:15.791 "base_bdev": "1ecb7056-4fc9-41ab-83b7-8ce1e3c1bc58", 00:29:15.791 "total_data_clusters": 237847, 00:29:15.791 "free_clusters": 237847, 00:29:15.791 "block_size": 512, 00:29:15.791 "cluster_size": 4194304 00:29:15.791 } 00:29:15.791 ]' 00:29:15.791 17:39:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="cb296969-e665-4cfe-919b-52a7eb6d5943") .free_clusters' 00:29:15.791 17:39:54 -- common/autotest_common.sh@1348 -- # fc=237847 00:29:15.791 17:39:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="cb296969-e665-4cfe-919b-52a7eb6d5943") .cluster_size' 00:29:16.048 17:39:54 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:16.048 17:39:54 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:29:16.048 17:39:54 -- common/autotest_common.sh@1353 -- # echo 951388 00:29:16.048 951388 00:29:16.048 17:39:54 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:16.611 03bd21bd-1b31-4dbf-8c6e-32abd6348971 00:29:16.611 17:39:55 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:16.869 17:39:55 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:17.126 17:39:56 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:17.382 17:39:56 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:17.382 17:39:56 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:17.382 17:39:56 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:17.382 17:39:56 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:17.382 17:39:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:17.382 17:39:56 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:17.382 17:39:56 -- common/autotest_common.sh@1320 -- # shift 00:29:17.382 17:39:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:17.382 17:39:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.382 17:39:56 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:17.382 17:39:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:17.382 17:39:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:17.653 17:39:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:17.653 17:39:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:17.653 17:39:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.653 17:39:56 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:17.653 17:39:56 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:17.653 17:39:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:17.653 17:39:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:17.653 17:39:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:17.653 17:39:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:29:17.653 17:39:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:17.912 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:17.912 fio-3.35 00:29:17.912 Starting 1 thread 00:29:17.912 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.433 00:29:20.433 test: (groupid=0, jobs=1): err= 0: pid=80993: Fri Jul 12 17:39:59 2024 00:29:20.433 read: IOPS=5412, BW=21.1MiB/s (22.2MB/s)(42.5MiB/2009msec) 00:29:20.433 slat (usec): min=2, max=125, avg= 2.52, stdev= 1.70 00:29:20.433 clat (usec): min=4461, max=20813, avg=13042.88, stdev=1201.65 00:29:20.433 lat (usec): min=4466, max=20815, avg=13045.41, stdev=1201.55 00:29:20.433 clat percentiles (usec): 00:29:20.433 | 1.00th=[10159], 5.00th=[11076], 10.00th=[11600], 20.00th=[12125], 00:29:20.433 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:29:20.433 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:29:20.433 | 99.00th=[15664], 99.50th=[15926], 99.90th=[19268], 99.95th=[19530], 00:29:20.433 | 99.99th=[20841] 00:29:20.433 bw ( KiB/s): min=20542, max=22144, per=99.80%, avg=21607.50, stdev=747.94, samples=4 00:29:20.433 iops : min= 5135, max= 5536, avg=5401.75, stdev=187.22, samples=4 00:29:20.433 write: IOPS=5395, BW=21.1MiB/s (22.1MB/s)(42.3MiB/2009msec); 0 zone resets 00:29:20.433 slat (usec): min=2, max=108, avg= 2.61, stdev= 1.09 00:29:20.433 clat (usec): min=2026, max=18904, avg=10490.85, stdev=1006.96 00:29:20.433 lat (usec): min=2033, max=18906, avg=10493.46, stdev=1006.90 00:29:20.433 clat percentiles (usec): 00:29:20.433 | 1.00th=[ 8160], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:29:20.433 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:29:20.433 | 70.00th=[10945], 80.00th=[11207], 
90.00th=[11600], 95.00th=[11994], 00:29:20.433 | 99.00th=[12649], 99.50th=[13173], 99.90th=[16450], 99.95th=[17957], 00:29:20.433 | 99.99th=[18744] 00:29:20.433 bw ( KiB/s): min=21312, max=21824, per=99.82%, avg=21545.00, stdev=211.77, samples=4 00:29:20.433 iops : min= 5328, max= 5456, avg=5386.25, stdev=52.94, samples=4 00:29:20.433 lat (msec) : 4=0.05%, 10=14.59%, 20=85.35%, 50=0.01% 00:29:20.433 cpu : usr=73.21%, sys=25.60%, ctx=92, majf=0, minf=4 00:29:20.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:20.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:20.433 issued rwts: total=10874,10840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:20.433 00:29:20.433 Run status group 0 (all jobs): 00:29:20.433 READ: bw=21.1MiB/s (22.2MB/s), 21.1MiB/s-21.1MiB/s (22.2MB/s-22.2MB/s), io=42.5MiB (44.5MB), run=2009-2009msec 00:29:20.433 WRITE: bw=21.1MiB/s (22.1MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=42.3MiB (44.4MB), run=2009-2009msec 00:29:20.433 17:39:59 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:20.433 17:39:59 -- host/fio.sh@74 -- # sync 00:29:20.433 17:39:59 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:24.612 17:40:03 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:24.612 17:40:03 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:27.886 17:40:06 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:27.886 17:40:06 -- host/fio.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:29.782 17:40:08 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:29.782 17:40:08 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:29.782 17:40:08 -- host/fio.sh@86 -- # nvmftestfini 00:29:29.782 17:40:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:29.782 17:40:08 -- nvmf/common.sh@116 -- # sync 00:29:29.782 17:40:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:29.782 17:40:08 -- nvmf/common.sh@119 -- # set +e 00:29:29.782 17:40:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:29.782 17:40:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:29.782 rmmod nvme_tcp 00:29:29.782 rmmod nvme_fabrics 00:29:29.782 rmmod nvme_keyring 00:29:29.782 17:40:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:29.782 17:40:08 -- nvmf/common.sh@123 -- # set -e 00:29:29.782 17:40:08 -- nvmf/common.sh@124 -- # return 0 00:29:29.782 17:40:08 -- nvmf/common.sh@477 -- # '[' -n 76558 ']' 00:29:29.782 17:40:08 -- nvmf/common.sh@478 -- # killprocess 76558 00:29:29.782 17:40:08 -- common/autotest_common.sh@926 -- # '[' -z 76558 ']' 00:29:29.782 17:40:08 -- common/autotest_common.sh@930 -- # kill -0 76558 00:29:29.782 17:40:08 -- common/autotest_common.sh@931 -- # uname 00:29:29.782 17:40:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:29.782 17:40:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76558 00:29:29.782 17:40:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:29.782 17:40:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:29.782 17:40:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76558' 00:29:29.782 killing process with pid 76558 00:29:29.782 17:40:08 -- common/autotest_common.sh@945 -- # kill 76558 00:29:29.782 17:40:08 -- common/autotest_common.sh@950 -- # wait 76558 00:29:30.040 17:40:08 -- nvmf/common.sh@480 -- # '[' '' 
== iso ']' 00:29:30.040 17:40:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:30.040 17:40:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:30.040 17:40:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.040 17:40:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:30.040 17:40:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.040 17:40:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.040 17:40:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.573 17:40:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:32.573 00:29:32.573 real 0m42.103s 00:29:32.573 user 3m12.779s 00:29:32.573 sys 0m8.717s 00:29:32.573 17:40:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.573 17:40:10 -- common/autotest_common.sh@10 -- # set +x 00:29:32.573 ************************************ 00:29:32.573 END TEST nvmf_fio_host 00:29:32.573 ************************************ 00:29:32.573 17:40:10 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:32.573 17:40:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:32.573 17:40:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:32.573 17:40:10 -- common/autotest_common.sh@10 -- # set +x 00:29:32.573 ************************************ 00:29:32.573 START TEST nvmf_failover 00:29:32.573 ************************************ 00:29:32.573 17:40:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:32.573 * Looking for test storage... 
00:29:32.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.573 17:40:11 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.573 17:40:11 -- nvmf/common.sh@7 -- # uname -s 00:29:32.573 17:40:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.573 17:40:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.573 17:40:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.573 17:40:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.573 17:40:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.573 17:40:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.573 17:40:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.573 17:40:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.573 17:40:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.573 17:40:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.573 17:40:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:32.573 17:40:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:32.573 17:40:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.573 17:40:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.573 17:40:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.573 17:40:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.573 17:40:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.573 17:40:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.573 17:40:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.573 17:40:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.573 17:40:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.573 17:40:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.573 17:40:11 -- paths/export.sh@5 -- # export PATH 00:29:32.573 17:40:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.573 17:40:11 -- nvmf/common.sh@46 -- # : 0 00:29:32.573 17:40:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:32.573 17:40:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:32.573 17:40:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:32.573 17:40:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.573 17:40:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.573 17:40:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:32.573 17:40:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:32.573 17:40:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:32.573 17:40:11 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.573 17:40:11 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.573 17:40:11 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.573 17:40:11 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:32.573 17:40:11 -- host/failover.sh@18 -- # nvmftestinit 00:29:32.574 17:40:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:32.574 17:40:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.574 17:40:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:32.574 17:40:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:32.574 17:40:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:32.574 17:40:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:32.574 17:40:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.574 17:40:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.574 17:40:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:32.574 17:40:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:32.574 17:40:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:32.574 17:40:11 -- common/autotest_common.sh@10 -- # set +x 00:29:37.891 17:40:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:37.891 17:40:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:37.891 17:40:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:37.891 17:40:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:37.891 17:40:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:37.891 17:40:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:37.891 17:40:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:37.891 17:40:15 -- nvmf/common.sh@294 -- # net_devs=() 00:29:37.891 17:40:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:37.891 17:40:15 -- nvmf/common.sh@295 -- # e810=() 00:29:37.891 17:40:15 -- nvmf/common.sh@295 -- # local -ga e810 00:29:37.891 17:40:15 -- nvmf/common.sh@296 -- # x722=() 00:29:37.891 17:40:15 -- nvmf/common.sh@296 -- # local -ga x722 00:29:37.891 17:40:15 -- nvmf/common.sh@297 -- # mlx=() 00:29:37.891 17:40:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:37.891 17:40:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:29:37.891 17:40:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.891 17:40:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.892 17:40:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.892 17:40:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:37.892 17:40:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:37.892 17:40:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:37.892 17:40:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:37.892 17:40:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:37.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:37.892 17:40:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:37.892 17:40:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:37.892 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:37.892 17:40:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.892 17:40:15 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:37.892 17:40:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:37.892 17:40:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.892 17:40:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:37.892 17:40:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.892 17:40:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:37.892 Found net devices under 0000:af:00.0: cvl_0_0 00:29:37.892 17:40:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.892 17:40:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:37.892 17:40:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.892 17:40:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:37.892 17:40:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.892 17:40:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:37.892 Found net devices under 0000:af:00.1: cvl_0_1 00:29:37.892 17:40:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.892 17:40:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:37.892 17:40:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:37.892 17:40:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:37.892 17:40:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:37.892 17:40:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.892 17:40:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.892 17:40:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.892 17:40:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:29:37.892 17:40:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.892 17:40:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.892 17:40:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:37.892 17:40:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.892 17:40:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.892 17:40:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:37.892 17:40:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:37.892 17:40:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.892 17:40:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.892 17:40:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.892 17:40:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.892 17:40:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:37.892 17:40:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.892 17:40:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.892 17:40:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.892 17:40:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:37.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:29:37.892 00:29:37.892 --- 10.0.0.2 ping statistics --- 00:29:37.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.892 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:37.892 17:40:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:29:37.892 00:29:37.892 --- 10.0.0.1 ping statistics --- 00:29:37.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.892 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:29:37.892 17:40:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.892 17:40:16 -- nvmf/common.sh@410 -- # return 0 00:29:37.892 17:40:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:37.892 17:40:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.892 17:40:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:37.892 17:40:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:37.892 17:40:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.892 17:40:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:37.892 17:40:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:37.892 17:40:16 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:37.892 17:40:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:37.892 17:40:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:37.892 17:40:16 -- common/autotest_common.sh@10 -- # set +x 00:29:37.892 17:40:16 -- nvmf/common.sh@469 -- # nvmfpid=86602 00:29:37.892 17:40:16 -- nvmf/common.sh@470 -- # waitforlisten 86602 00:29:37.892 17:40:16 -- common/autotest_common.sh@819 -- # '[' -z 86602 ']' 00:29:37.892 17:40:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.892 17:40:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:37.892 17:40:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
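The nvmf_tcp_init steps logged above (common.sh@228-267) amount to moving the target NIC into its own network namespace so initiator and target traffic crosses a real TCP path between two interfaces. Below is a dry-run sketch of that sequence using the interface names and addresses from this log; the run()/DRY_RUN wrapper is an assumption of the sketch, and by default it only prints each command instead of executing it, since the real commands need root and the cvl_0_* NICs.

```shell
#!/bin/sh
# Sketch of the NVMe/TCP test-network setup performed above: the target
# NIC is isolated in a network namespace so that initiator (cvl_0_1) and
# target (cvl_0_0) traffic genuinely traverses the TCP stack.
# DRY_RUN=1 (the default) prints each command rather than running it.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "$*" || "$@"; }

nvmf_tcp_init_sketch() {
    tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"                  # target side isolated
    run ip addr add 10.0.0.1/24 dev "$ini_if"              # initiator IP
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch
```

The two pings in the log are the sanity check that both directions of this plumbing work before nvmf_tgt is started inside the namespace.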
00:29:37.892 17:40:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:37.892 17:40:16 -- common/autotest_common.sh@10 -- # set +x 00:29:37.892 17:40:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:37.892 [2024-07-12 17:40:16.121436] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:37.892 [2024-07-12 17:40:16.121493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.892 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.892 [2024-07-12 17:40:16.197740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:37.892 [2024-07-12 17:40:16.239738] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:37.892 [2024-07-12 17:40:16.239882] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.892 [2024-07-12 17:40:16.239894] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.892 [2024-07-12 17:40:16.239903] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:37.892 [2024-07-12 17:40:16.239946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.892 [2024-07-12 17:40:16.240034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.892 [2024-07-12 17:40:16.240037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.170 17:40:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:38.170 17:40:17 -- common/autotest_common.sh@852 -- # return 0 00:29:38.170 17:40:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:38.170 17:40:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:38.170 17:40:17 -- common/autotest_common.sh@10 -- # set +x 00:29:38.170 17:40:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.170 17:40:17 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:38.466 [2024-07-12 17:40:17.301068] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.466 17:40:17 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:38.724 Malloc0 00:29:38.724 17:40:17 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.982 17:40:17 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:39.240 17:40:18 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.498 [2024-07-12 17:40:18.325362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.498 17:40:18 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:39.755 [2024-07-12 17:40:18.509934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:39.756 17:40:18 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:40.014 [2024-07-12 17:40:18.762891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:40.014 17:40:18 -- host/failover.sh@31 -- # bdevperf_pid=87160 00:29:40.014 17:40:18 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:40.014 17:40:18 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:40.014 17:40:18 -- host/failover.sh@34 -- # waitforlisten 87160 /var/tmp/bdevperf.sock 00:29:40.014 17:40:18 -- common/autotest_common.sh@819 -- # '[' -z 87160 ']' 00:29:40.014 17:40:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.014 17:40:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:40.014 17:40:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:40.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
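The target-side provisioning driven by host/failover.sh@20-28 above reduces to a handful of rpc.py calls: create the TCP transport, back a namespace with a malloc bdev, and expose it on three listeners (the three ports the failover test will knock out one by one). Sketched here with `rpc` standing in for .../spdk/scripts/rpc.py; it just echoes the call so the sequence can be read without a live nvmf_tgt (that substitution is an assumption of this sketch).

```shell
#!/bin/sh
# Sketch of the subsystem setup from host/failover.sh@22-28 above.
# "rpc" stands in for the real rpc.py and only echoes (sketch assumption).
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport (flags as in the log)
rpc bdev_malloc_create 64 512 -b Malloc0             # RAM-backed bdev: 64 MB, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host, -s: serial
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do                       # three listeners = three failover paths
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done
```

Each add_listener call is answered in the log by a `nvmf_tcp_listen: *** NVMe/TCP Target Listening ... ***` notice on the corresponding port.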
00:29:40.014 17:40:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:40.014 17:40:18 -- common/autotest_common.sh@10 -- # set +x 00:29:40.948 17:40:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:40.948 17:40:19 -- common/autotest_common.sh@852 -- # return 0 00:29:40.948 17:40:19 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:41.206 NVMe0n1 00:29:41.206 17:40:20 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:41.463 00:29:41.463 17:40:20 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:41.463 17:40:20 -- host/failover.sh@39 -- # run_test_pid=87433 00:29:41.463 17:40:20 -- host/failover.sh@41 -- # sleep 1 00:29:42.835 17:40:21 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.835 [2024-07-12 17:40:21.645491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e1390 is same with the state(5) to be set 00:29:42.836 17:40:21 -- host/failover.sh@45 -- # sleep 3 00:29:46.112 17:40:24 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:46.112 00:29:46.112 17:40:25 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:46.370 [2024-07-12 17:40:25.264243]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e28e0 is same with the state(5) to be set 00:29:46.371 17:40:25 -- host/failover.sh@50 -- # sleep 3 00:29:49.648 17:40:28 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.648 [2024-07-12 17:40:28.525645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.648 17:40:28 -- host/failover.sh@55 -- # sleep 1 00:29:51.013 17:40:29 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:51.013 [2024-07-12 17:40:29.788701]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e2fc0 is same with the state(5) to be set 00:29:51.014 17:40:29 -- host/failover.sh@59 -- # wait 87433 00:29:57.573 0 00:29:57.573 17:40:35 -- host/failover.sh@61 -- # killprocess 87160 00:29:57.573 17:40:35 -- common/autotest_common.sh@926 -- # '[' -z 87160 ']' 00:29:57.573 17:40:35 -- common/autotest_common.sh@930 -- # kill -0 87160 00:29:57.573 17:40:35 -- common/autotest_common.sh@931 -- # uname 00:29:57.573 17:40:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:57.573 17:40:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87160 00:29:57.573 17:40:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:57.573 17:40:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:57.573 17:40:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87160' killing process with pid 87160 00:29:57.573 17:40:35 -- common/autotest_common.sh@945 -- # kill 87160 00:29:57.573 17:40:35 -- common/autotest_common.sh@950 -- # wait 87160 00:29:57.573 17:40:35 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt [2024-07-12 17:40:18.843103] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:29:57.573 [2024-07-12 17:40:18.843167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87160 ]
00:29:57.573 EAL: No free 2048 kB hugepages reported on node 1
00:29:57.573 [2024-07-12 17:40:18.925027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:57.573 [2024-07-12 17:40:18.966459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:57.573 Running I/O for 15 seconds...
00:29:57.573 [2024-07-12 17:40:21.646427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.573 [2024-07-12 17:40:21.646470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.573 [2024-07-12 17:40:21.646491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.573 [2024-07-12 17:40:21.646502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.573 [2024-07-12 17:40:21.646516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.573 [2024-07-12 17:40:21.646526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.573 [2024-07-12 17:40:21.646538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.573 [2024-07-12 17:40:21.646548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.573 [2024-07-12 17:40:21.646560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.573 [2024-07-12 17:40:21.646569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.573 [command/completion pairs in the same format repeated at 2024-07-12 17:40:21.646581 through 17:40:21.648720 for the remaining outstanding I/O on sqid:1 — READ and WRITE commands covering lba 18264 through 19368, len:8, each completed with the identical ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 status]
00:29:57.573 [2024-07-12 17:40:21.648732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1
lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 
[2024-07-12 17:40:21.648859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.576 [2024-07-12 17:40:21.648938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.576 [2024-07-12 17:40:21.648950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.577 [2024-07-12 17:40:21.648960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.648973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.577 [2024-07-12 17:40:21.648982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.648997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.577 [2024-07-12 17:40:21.649054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.577 [2024-07-12 17:40:21.649120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.577 [2024-07-12 17:40:21.649143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 
[2024-07-12 17:40:21.649247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:21.649333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.577 [2024-07-12 17:40:21.649376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.577 [2024-07-12 17:40:21.649386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18896 len:8 PRP1 0x0 PRP2 0x0 00:29:57.577 [2024-07-12 17:40:21.649395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 
17:40:21.649444] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbbcb00 was disconnected and freed. reset controller. 00:29:57.577 [2024-07-12 17:40:21.649462] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:57.577 [2024-07-12 17:40:21.649489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.577 [2024-07-12 17:40:21.649500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.577 [2024-07-12 17:40:21.649520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.577 [2024-07-12 17:40:21.649541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.577 [2024-07-12 17:40:21.649560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:21.649570] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:57.577 [2024-07-12 17:40:21.649598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9daf0 (9): Bad file descriptor 00:29:57.577 [2024-07-12 17:40:21.652373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.577 [2024-07-12 17:40:21.761816] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:57.577 [2024-07-12 17:40:25.264869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:25.264909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:25.264929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:25.264941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:25.264959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:25.264969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:25.264982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:25.264992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:25.265004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121280 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:25.265013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:25.265025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:25.265035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:25.265047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.577 [2024-07-12 17:40:25.265056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.577 [2024-07-12 17:40:25.265068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 
17:40:25.265135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 
nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:57.578 [2024-07-12 17:40:25.265525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.578 [2024-07-12 17:40:25.265622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.578 [2024-07-12 17:40:25.265634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.265713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.265757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.265778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.265824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.265847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 
[2024-07-12 17:40:25.265903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.265978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.265990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 
[2024-07-12 17:40:25.266286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.579 [2024-07-12 17:40:25.266340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.579 [2024-07-12 17:40:25.266352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.579 [2024-07-12 17:40:25.266365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 
[2024-07-12 17:40:25.266666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 
nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.266942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.266985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.266997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.267007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.267019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.267031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:57.580 [2024-07-12 17:40:25.267043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.580 [2024-07-12 17:40:25.267052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.267064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.267076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.267088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.580 [2024-07-12 17:40:25.267098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.580 [2024-07-12 17:40:25.267110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.581 [2024-07-12 17:40:25.267142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.581 [2024-07-12 17:40:25.267186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.581 [2024-07-12 17:40:25.267391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 
[2024-07-12 17:40:25.267424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.581 [2024-07-12 17:40:25.267479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.581 [2024-07-12 17:40:25.267524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.581 [2024-07-12 17:40:25.267544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.581 [2024-07-12 17:40:25.267566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.581 [2024-07-12 17:40:25.267752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbd940 is same with the state(5) to be set 00:29:57.581 [2024-07-12 17:40:25.267777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.581 [2024-07-12 17:40:25.267785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.581 [2024-07-12 17:40:25.267795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121472 len:8 PRP1 0x0 PRP2 0x0 00:29:57.581 [2024-07-12 17:40:25.267805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267853] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbbd940 was disconnected and freed. reset controller. 00:29:57.581 [2024-07-12 17:40:25.267864] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:57.581 [2024-07-12 17:40:25.267891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.581 [2024-07-12 17:40:25.267903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.581 [2024-07-12 17:40:25.267913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.582 [2024-07-12 17:40:25.267922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.582 [2024-07-12 17:40:25.267932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.582 [2024-07-12 17:40:25.267942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.582 [2024-07-12 17:40:25.267952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.582 [2024-07-12 17:40:25.267962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.582 [2024-07-12 17:40:25.267971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:57.582 [2024-07-12 17:40:25.270738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.582 [2024-07-12 17:40:25.270769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9daf0 (9): Bad file descriptor
00:29:57.582 [2024-07-12 17:40:25.302474] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:57.582 [2024-07-12 17:40:29.789277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.582 [2024-07-12 17:40:29.789829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.582 [2024-07-12 17:40:29.789895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.582 [2024-07-12 17:40:29.789917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.582 [2024-07-12 17:40:29.789938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.582 [2024-07-12 17:40:29.789950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.789960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.789973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.789982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.789996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.583 [2024-07-12 17:40:29.790672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.583 [2024-07-12 17:40:29.790684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.583 [2024-07-12 17:40:29.790694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.790715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.790737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.790759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.790780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.790977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.790989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.790999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.791020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.791042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.791063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.791085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.791109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.791131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.791153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.584 [2024-07-12 17:40:29.791174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.791196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.791217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.791239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.584 [2024-07-12 17:40:29.791251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.584 [2024-07-12 17:40:29.791267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.585 [2024-07-12 17:40:29.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 [2024-07-12 17:40:29.791927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.585 [2024-07-12 17:40:29.791937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.585 
[2024-07-12 17:40:29.791950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.585 [2024-07-12 17:40:29.791960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.585 [2024-07-12 17:40:29.791972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.585 [2024-07-12 17:40:29.791981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.585 [2024-07-12 17:40:29.791993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.585 [2024-07-12 17:40:29.792003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.585 [2024-07-12 17:40:29.792014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.586 [2024-07-12 17:40:29.792024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.586 [2024-07-12 17:40:29.792046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.586 [2024-07-12 17:40:29.792066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.586 [2024-07-12 17:40:29.792088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.586 [2024-07-12 17:40:29.792109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.586 [2024-07-12 17:40:29.792133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0ab0 is same with the state(5) to be set 00:29:57.586 [2024-07-12 17:40:29.792155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.586 [2024-07-12 17:40:29.792163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.586 [2024-07-12 17:40:29.792175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23896 len:8 PRP1 0x0 PRP2 0x0 00:29:57.586 [2024-07-12 17:40:29.792185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792233] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbc0ab0 was disconnected and freed. reset controller. 00:29:57.586 [2024-07-12 17:40:29.792245] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:57.586 [2024-07-12 17:40:29.792277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.586 [2024-07-12 17:40:29.792289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.586 [2024-07-12 17:40:29.792311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.586 [2024-07-12 17:40:29.792330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.586 [2024-07-12 17:40:29.792350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.586 [2024-07-12 17:40:29.792360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:57.586 [2024-07-12 17:40:29.795197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.586 [2024-07-12 17:40:29.795228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9daf0 (9): Bad file descriptor 00:29:57.586 [2024-07-12 17:40:29.830498] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:57.586 00:29:57.586 Latency(us) 00:29:57.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.586 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:57.586 Verification LBA range: start 0x0 length 0x4000 00:29:57.586 NVMe0n1 : 15.01 11807.39 46.12 550.47 0.00 10337.48 707.49 14954.12 00:29:57.586 =================================================================================================================== 00:29:57.586 Total : 11807.39 46.12 550.47 0.00 10337.48 707.49 14954.12 00:29:57.586 Received shutdown signal, test time was about 15.000000 seconds 00:29:57.586 00:29:57.586 Latency(us) 00:29:57.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.586 =================================================================================================================== 00:29:57.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.586 17:40:35 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:57.586 17:40:35 -- host/failover.sh@65 -- # count=3 00:29:57.586 17:40:35 -- host/failover.sh@67 -- # (( count != 3 )) 00:29:57.586 17:40:35 -- host/failover.sh@73 -- # bdevperf_pid=90104 00:29:57.586 17:40:35 -- host/failover.sh@75 -- # waitforlisten 90104 /var/tmp/bdevperf.sock 00:29:57.586 17:40:35 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:57.586 17:40:35 -- common/autotest_common.sh@819 -- # 
'[' -z 90104 ']' 00:29:57.586 17:40:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:57.586 17:40:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:57.586 17:40:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:57.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:57.586 17:40:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:57.586 17:40:35 -- common/autotest_common.sh@10 -- # set +x 00:29:57.844 17:40:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:57.844 17:40:36 -- common/autotest_common.sh@852 -- # return 0 00:29:57.844 17:40:36 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:58.101 [2024-07-12 17:40:36.966567] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:58.101 17:40:36 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:58.358 [2024-07-12 17:40:37.219417] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:58.358 17:40:37 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:58.924 NVMe0n1 00:29:58.924 17:40:37 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.182 00:29:59.182 17:40:38 -- host/failover.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.747 00:29:59.747 17:40:38 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:59.747 17:40:38 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:59.747 17:40:38 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:00.005 17:40:38 -- host/failover.sh@87 -- # sleep 3 00:30:03.284 17:40:41 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:03.284 17:40:41 -- host/failover.sh@88 -- # grep -q NVMe0 00:30:03.284 17:40:42 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.284 17:40:42 -- host/failover.sh@90 -- # run_test_pid=91181 00:30:03.284 17:40:42 -- host/failover.sh@92 -- # wait 91181 00:30:04.656 0 00:30:04.656 17:40:43 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.656 [2024-07-12 17:40:35.879023] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:04.656 [2024-07-12 17:40:35.879087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90104 ] 00:30:04.656 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.657 [2024-07-12 17:40:35.960656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.657 [2024-07-12 17:40:35.998362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.657 [2024-07-12 17:40:38.918701] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:04.657 [2024-07-12 17:40:38.918756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.657 [2024-07-12 17:40:38.918771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.657 [2024-07-12 17:40:38.918782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.657 [2024-07-12 17:40:38.918792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.657 [2024-07-12 17:40:38.918802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.657 [2024-07-12 17:40:38.918811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.657 [2024-07-12 17:40:38.918821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.657 [2024-07-12 17:40:38.918831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.657 [2024-07-12 17:40:38.918841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:04.657 [2024-07-12 17:40:38.918871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:04.657 [2024-07-12 17:40:38.918890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1931af0 (9): Bad file descriptor 00:30:04.657 [2024-07-12 17:40:38.929528] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:04.657 Running I/O for 1 seconds... 00:30:04.657 00:30:04.657 Latency(us) 00:30:04.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.657 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:04.657 Verification LBA range: start 0x0 length 0x4000 00:30:04.657 NVMe0n1 : 1.01 11547.66 45.11 0.00 0.00 11031.80 1199.01 12511.42 00:30:04.657 =================================================================================================================== 00:30:04.657 Total : 11547.66 45.11 0.00 0.00 11031.80 1199.01 12511.42 00:30:04.657 17:40:43 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.657 17:40:43 -- host/failover.sh@95 -- # grep -q NVMe0 00:30:04.657 17:40:43 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:04.913 17:40:43 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.913 17:40:43 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:05.170 17:40:44 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.428 17:40:44 -- host/failover.sh@101 -- # sleep 3 00:30:08.705 17:40:47 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:08.705 17:40:47 -- host/failover.sh@103 -- # grep -q NVMe0 00:30:08.705 17:40:47 -- host/failover.sh@108 -- # killprocess 90104 00:30:08.705 17:40:47 -- common/autotest_common.sh@926 -- # '[' -z 90104 ']' 00:30:08.705 17:40:47 -- common/autotest_common.sh@930 -- # kill -0 90104 00:30:08.705 17:40:47 -- common/autotest_common.sh@931 -- # uname 00:30:08.705 17:40:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:08.705 17:40:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90104 00:30:08.705 17:40:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:08.705 17:40:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:08.705 17:40:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90104' 00:30:08.705 killing process with pid 90104 00:30:08.705 17:40:47 -- common/autotest_common.sh@945 -- # kill 90104 00:30:08.705 17:40:47 -- common/autotest_common.sh@950 -- # wait 90104 00:30:08.962 17:40:47 -- host/failover.sh@110 -- # sync 00:30:08.962 17:40:47 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.220 17:40:47 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:09.220 17:40:47 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:09.220 17:40:47 -- host/failover.sh@116 -- # nvmftestfini 00:30:09.220 17:40:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:09.220 17:40:47 -- nvmf/common.sh@116 -- # 
sync 00:30:09.220 17:40:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:09.220 17:40:47 -- nvmf/common.sh@119 -- # set +e 00:30:09.220 17:40:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:09.220 17:40:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:09.220 rmmod nvme_tcp 00:30:09.220 rmmod nvme_fabrics 00:30:09.220 rmmod nvme_keyring 00:30:09.220 17:40:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:09.220 17:40:48 -- nvmf/common.sh@123 -- # set -e 00:30:09.220 17:40:48 -- nvmf/common.sh@124 -- # return 0 00:30:09.220 17:40:48 -- nvmf/common.sh@477 -- # '[' -n 86602 ']' 00:30:09.220 17:40:48 -- nvmf/common.sh@478 -- # killprocess 86602 00:30:09.220 17:40:48 -- common/autotest_common.sh@926 -- # '[' -z 86602 ']' 00:30:09.220 17:40:48 -- common/autotest_common.sh@930 -- # kill -0 86602 00:30:09.220 17:40:48 -- common/autotest_common.sh@931 -- # uname 00:30:09.220 17:40:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:09.220 17:40:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86602 00:30:09.220 17:40:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:09.220 17:40:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:09.220 17:40:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86602' 00:30:09.220 killing process with pid 86602 00:30:09.220 17:40:48 -- common/autotest_common.sh@945 -- # kill 86602 00:30:09.220 17:40:48 -- common/autotest_common.sh@950 -- # wait 86602 00:30:09.478 17:40:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:09.478 17:40:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:09.478 17:40:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:09.478 17:40:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:09.478 17:40:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:09.478 17:40:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.478 17:40:48 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:09.478 17:40:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.006 17:40:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:12.006 00:30:12.006 real 0m39.410s 00:30:12.006 user 2m10.653s 00:30:12.006 sys 0m7.318s 00:30:12.006 17:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:12.006 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:30:12.006 ************************************ 00:30:12.006 END TEST nvmf_failover 00:30:12.006 ************************************ 00:30:12.006 17:40:50 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:12.006 17:40:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:12.006 17:40:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:12.006 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:30:12.006 ************************************ 00:30:12.006 START TEST nvmf_discovery 00:30:12.006 ************************************ 00:30:12.006 17:40:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:12.006 * Looking for test storage... 
00:30:12.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.006 17:40:50 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.006 17:40:50 -- nvmf/common.sh@7 -- # uname -s 00:30:12.006 17:40:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.006 17:40:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.006 17:40:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.006 17:40:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.006 17:40:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.006 17:40:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.006 17:40:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.006 17:40:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.006 17:40:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.006 17:40:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.006 17:40:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:12.006 17:40:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:12.006 17:40:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.006 17:40:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.006 17:40:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.006 17:40:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.006 17:40:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.006 17:40:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.006 17:40:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.006 17:40:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.006 17:40:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.006 17:40:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.006 17:40:50 -- paths/export.sh@5 -- # export PATH 00:30:12.006 17:40:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.006 17:40:50 -- nvmf/common.sh@46 -- # : 0 00:30:12.006 17:40:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:12.006 17:40:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:12.006 17:40:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:12.006 17:40:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.006 17:40:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.006 17:40:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:12.006 17:40:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:12.006 17:40:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:12.006 17:40:50 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:12.006 17:40:50 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:12.006 17:40:50 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:12.006 17:40:50 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:12.006 17:40:50 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:12.006 17:40:50 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:12.006 17:40:50 -- host/discovery.sh@25 -- # nvmftestinit 00:30:12.006 17:40:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:12.006 17:40:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.006 17:40:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:12.006 17:40:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:12.006 
17:40:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:12.006 17:40:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.006 17:40:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.006 17:40:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.006 17:40:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:12.006 17:40:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:12.006 17:40:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:12.006 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:30:17.335 17:40:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:17.335 17:40:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:17.335 17:40:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:17.335 17:40:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:17.335 17:40:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:17.335 17:40:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:17.335 17:40:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:17.335 17:40:55 -- nvmf/common.sh@294 -- # net_devs=() 00:30:17.335 17:40:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:17.335 17:40:55 -- nvmf/common.sh@295 -- # e810=() 00:30:17.335 17:40:55 -- nvmf/common.sh@295 -- # local -ga e810 00:30:17.335 17:40:55 -- nvmf/common.sh@296 -- # x722=() 00:30:17.335 17:40:55 -- nvmf/common.sh@296 -- # local -ga x722 00:30:17.335 17:40:55 -- nvmf/common.sh@297 -- # mlx=() 00:30:17.335 17:40:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:17.335 17:40:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.335 17:40:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:17.335 17:40:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:17.335 17:40:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:17.335 17:40:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:17.335 17:40:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:17.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:17.335 17:40:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:17.335 17:40:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:17.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:17.335 17:40:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:17.335 17:40:55 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:17.335 17:40:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:17.335 17:40:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.335 17:40:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:17.335 17:40:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.335 17:40:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:17.335 Found net devices under 0000:af:00.0: cvl_0_0 00:30:17.335 17:40:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.335 17:40:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:17.335 17:40:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.335 17:40:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:17.335 17:40:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.335 17:40:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:17.335 Found net devices under 0000:af:00.1: cvl_0_1 00:30:17.335 17:40:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.335 17:40:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:17.335 17:40:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:17.335 17:40:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:17.335 17:40:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:17.335 17:40:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.335 17:40:55 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.336 17:40:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.336 17:40:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:17.336 17:40:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.336 17:40:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.336 17:40:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:17.336 17:40:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.336 17:40:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.336 17:40:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:17.336 17:40:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:17.336 17:40:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.336 17:40:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.336 17:40:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.336 17:40:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.336 17:40:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:17.336 17:40:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.336 17:40:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.336 17:40:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.336 17:40:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:17.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:17.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:30:17.336 00:30:17.336 --- 10.0.0.2 ping statistics --- 00:30:17.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.336 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:17.336 17:40:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:30:17.336 00:30:17.336 --- 10.0.0.1 ping statistics --- 00:30:17.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.336 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:30:17.336 17:40:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.336 17:40:56 -- nvmf/common.sh@410 -- # return 0 00:30:17.336 17:40:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:17.336 17:40:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.336 17:40:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:17.336 17:40:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:17.336 17:40:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.336 17:40:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:17.336 17:40:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:17.336 17:40:56 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:17.336 17:40:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:17.336 17:40:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:17.336 17:40:56 -- common/autotest_common.sh@10 -- # set +x 00:30:17.336 17:40:56 -- nvmf/common.sh@469 -- # nvmfpid=95829 00:30:17.336 17:40:56 -- nvmf/common.sh@470 -- # waitforlisten 95829 00:30:17.336 17:40:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:17.336 17:40:56 -- common/autotest_common.sh@819 -- # 
'[' -z 95829 ']' 00:30:17.336 17:40:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.336 17:40:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:17.336 17:40:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.336 17:40:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:17.336 17:40:56 -- common/autotest_common.sh@10 -- # set +x 00:30:17.336 [2024-07-12 17:40:56.174881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:17.336 [2024-07-12 17:40:56.174938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.336 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.336 [2024-07-12 17:40:56.254224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.336 [2024-07-12 17:40:56.296360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:17.336 [2024-07-12 17:40:56.296496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.336 [2024-07-12 17:40:56.296507] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.336 [2024-07-12 17:40:56.296516] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
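The PCI scan earlier in this run buckets NICs into e810/x722/mlx arrays by vendor:device ID before choosing TCP test interfaces. A minimal sketch of that lookup pattern, with an abbreviated ID table (nvmf/common.sh matches many more IDs than shown, and the ConnectX entry here is only illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device bucketing done by gather_supported_nvmf_pci_devs.
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()

classify() {  # classify <pci-addr> <vendor-id> <device-id>
  local pci=$1 ven=$2 dev=$3
  case "$ven:$dev" in
    "$intel:0x1592"|"$intel:0x159b")        e810+=("$pci") ;;
    "$intel:0x37d2")                        x722+=("$pci") ;;
    "$mellanox:0x1017"|"$mellanox:0x101d")  mlx+=("$pci") ;;
  esac
}

# The two E810 ports found in this run, plus a hypothetical ConnectX port:
classify 0000:af:00.0 0x8086 0x159b
classify 0000:af:00.1 0x8086 0x159b
classify 0000:b0:00.0 0x15b3 0x1017

echo "e810=${#e810[@]} x722=${#x722[@]} mlx=${#mlx[@]}"
```

With two matching E810 ports, the run proceeds with `pci_devs=("${e810[@]}")`, which is why the log reports both `0000:af:00.0` and `0000:af:00.1`.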
00:30:17.336 [2024-07-12 17:40:56.296535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.270 17:40:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:18.270 17:40:57 -- common/autotest_common.sh@852 -- # return 0 00:30:18.270 17:40:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:18.270 17:40:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:18.270 17:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:18.270 17:40:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.270 17:40:57 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.270 17:40:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.270 17:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:18.271 [2024-07-12 17:40:57.143042] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.271 17:40:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.271 17:40:57 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:18.271 17:40:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.271 17:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:18.271 [2024-07-12 17:40:57.155204] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:18.271 17:40:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.271 17:40:57 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:18.271 17:40:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.271 17:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:18.271 null0 00:30:18.271 17:40:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.271 17:40:57 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:18.271 17:40:57 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:30:18.271 17:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:18.271 null1 00:30:18.271 17:40:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.271 17:40:57 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:18.271 17:40:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.271 17:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:18.271 17:40:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.271 17:40:57 -- host/discovery.sh@45 -- # hostpid=96021 00:30:18.271 17:40:57 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:18.271 17:40:57 -- host/discovery.sh@46 -- # waitforlisten 96021 /tmp/host.sock 00:30:18.271 17:40:57 -- common/autotest_common.sh@819 -- # '[' -z 96021 ']' 00:30:18.271 17:40:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:18.271 17:40:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:18.271 17:40:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:18.271 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:18.271 17:40:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:18.271 17:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:18.271 [2024-07-12 17:40:57.233087] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:18.271 [2024-07-12 17:40:57.233141] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96021 ] 00:30:18.529 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.529 [2024-07-12 17:40:57.313691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.529 [2024-07-12 17:40:57.355516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:18.529 [2024-07-12 17:40:57.355667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.462 17:40:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:19.462 17:40:58 -- common/autotest_common.sh@852 -- # return 0 00:30:19.462 17:40:58 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:19.462 17:40:58 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:19.462 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.462 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.462 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:19.463 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@72 -- # notify_id=0 00:30:19.463 17:40:58 -- host/discovery.sh@78 -- # get_subsystem_names 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:19.463 
17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # sort 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # xargs 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:30:19.463 17:40:58 -- host/discovery.sh@79 -- # get_bdev_list 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:19.463 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # sort 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # xargs 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:30:19.463 17:40:58 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:19.463 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@82 -- # get_subsystem_names 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:19.463 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # sort 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # xargs 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:30:19.463 17:40:58 -- 
host/discovery.sh@83 -- # get_bdev_list 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.463 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # sort 00:30:19.463 17:40:58 -- host/discovery.sh@55 -- # xargs 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:19.463 17:40:58 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:19.463 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.463 17:40:58 -- host/discovery.sh@86 -- # get_subsystem_names 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:19.463 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # sort 00:30:19.463 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.463 17:40:58 -- host/discovery.sh@59 -- # xargs 00:30:19.463 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:30:19.721 17:40:58 -- host/discovery.sh@87 -- # get_bdev_list 00:30:19.721 17:40:58 -- host/discovery.sh@55 -- # sort 00:30:19.721 17:40:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.721 17:40:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:19.721 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.721 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.721 17:40:58 
-- host/discovery.sh@55 -- # xargs 00:30:19.721 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:19.721 17:40:58 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.721 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.721 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.721 [2024-07-12 17:40:58.518999] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.721 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@92 -- # get_subsystem_names 00:30:19.721 17:40:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:19.721 17:40:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:19.721 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.721 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.721 17:40:58 -- host/discovery.sh@59 -- # sort 00:30:19.721 17:40:58 -- host/discovery.sh@59 -- # xargs 00:30:19.721 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:19.721 17:40:58 -- host/discovery.sh@93 -- # get_bdev_list 00:30:19.721 17:40:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.721 17:40:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:19.721 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.721 17:40:58 -- host/discovery.sh@55 -- # sort 00:30:19.721 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.721 17:40:58 -- host/discovery.sh@55 -- # xargs 00:30:19.721 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:30:19.721 17:40:58 -- host/discovery.sh@94 -- # get_notification_count 
00:30:19.721 17:40:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:19.721 17:40:58 -- host/discovery.sh@74 -- # jq '. | length' 00:30:19.721 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.721 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.721 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@74 -- # notification_count=0 00:30:19.721 17:40:58 -- host/discovery.sh@75 -- # notify_id=0 00:30:19.721 17:40:58 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:19.721 17:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.721 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.721 17:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.721 17:40:58 -- host/discovery.sh@100 -- # sleep 1 00:30:20.287 [2024-07-12 17:40:59.225004] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:20.287 [2024-07-12 17:40:59.225031] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:20.287 [2024-07-12 17:40:59.225048] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:20.544 [2024-07-12 17:40:59.352507] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:20.802 [2024-07-12 17:40:59.576683] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:20.802 [2024-07-12 17:40:59.576707] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:20.802 17:40:59 -- host/discovery.sh@101 -- # get_subsystem_names 
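The empty-list checks above (`[[ '' == '' ]]`) work because helpers like get_subsystem_names and get_bdev_list flatten the RPC JSON into one sorted, space-separated line via `jq -r '.[].name' | sort | xargs`, making plain string comparison possible. A sketch with canned names standing in for the `rpc_cmd ... | jq` output:

```shell
#!/usr/bin/env bash
# get_bdev_list-style normalization: sort the names, then let xargs
# join them onto a single space-separated line for string comparison.
bdev_list=$(printf '%s\n' nvme0n2 nvme0n1 | sort | xargs)
[[ "$bdev_list" == "nvme0n1 nvme0n2" ]] && echo "bdev list matches"
```

The same normalization is what lets later steps in this log assert `[[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]` once both null bdevs attach.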
00:30:20.802 17:40:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.802 17:40:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.802 17:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.802 17:40:59 -- host/discovery.sh@59 -- # sort 00:30:20.802 17:40:59 -- common/autotest_common.sh@10 -- # set +x 00:30:20.802 17:40:59 -- host/discovery.sh@59 -- # xargs 00:30:20.802 17:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.802 17:40:59 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.802 17:40:59 -- host/discovery.sh@102 -- # get_bdev_list 00:30:20.802 17:40:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.802 17:40:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.802 17:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.802 17:40:59 -- common/autotest_common.sh@10 -- # set +x 00:30:20.802 17:40:59 -- host/discovery.sh@55 -- # sort 00:30:20.802 17:40:59 -- host/discovery.sh@55 -- # xargs 00:30:20.802 17:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.060 17:40:59 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:21.060 17:40:59 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:30:21.060 17:40:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:21.060 17:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.060 17:40:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:21.060 17:40:59 -- common/autotest_common.sh@10 -- # set +x 00:30:21.060 17:40:59 -- host/discovery.sh@63 -- # sort -n 00:30:21.060 17:40:59 -- host/discovery.sh@63 -- # xargs 00:30:21.060 17:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.060 17:40:59 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:30:21.060 17:40:59 -- host/discovery.sh@104 -- # get_notification_count 00:30:21.060 17:40:59 -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:21.060 17:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.060 17:40:59 -- host/discovery.sh@74 -- # jq '. | length' 00:30:21.060 17:40:59 -- common/autotest_common.sh@10 -- # set +x 00:30:21.060 17:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.060 17:40:59 -- host/discovery.sh@74 -- # notification_count=1 00:30:21.060 17:40:59 -- host/discovery.sh@75 -- # notify_id=1 00:30:21.060 17:40:59 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:30:21.060 17:40:59 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:21.060 17:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.060 17:40:59 -- common/autotest_common.sh@10 -- # set +x 00:30:21.060 17:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.060 17:40:59 -- host/discovery.sh@109 -- # sleep 1 00:30:21.990 17:41:00 -- host/discovery.sh@110 -- # get_bdev_list 00:30:21.991 17:41:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:21.991 17:41:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:21.991 17:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.991 17:41:00 -- host/discovery.sh@55 -- # sort 00:30:21.991 17:41:00 -- common/autotest_common.sh@10 -- # set +x 00:30:21.991 17:41:00 -- host/discovery.sh@55 -- # xargs 00:30:21.991 17:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.248 17:41:00 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.248 17:41:00 -- host/discovery.sh@111 -- # get_notification_count 00:30:22.248 17:41:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:22.248 17:41:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:22.248 17:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.248 17:41:00 -- common/autotest_common.sh@10 -- # set +x 00:30:22.248 17:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.248 17:41:01 -- host/discovery.sh@74 -- # notification_count=1 00:30:22.248 17:41:01 -- host/discovery.sh@75 -- # notify_id=2 00:30:22.248 17:41:01 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:30:22.248 17:41:01 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:22.248 17:41:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.248 17:41:01 -- common/autotest_common.sh@10 -- # set +x 00:30:22.248 [2024-07-12 17:41:01.018345] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:22.248 [2024-07-12 17:41:01.019164] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:22.248 [2024-07-12 17:41:01.019197] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:22.248 17:41:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.248 17:41:01 -- host/discovery.sh@117 -- # sleep 1 00:30:22.248 [2024-07-12 17:41:01.146592] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:22.506 [2024-07-12 17:41:01.413948] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:22.506 [2024-07-12 17:41:01.413971] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:22.506 [2024-07-12 17:41:01.413979] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:23.073 17:41:02 -- host/discovery.sh@118 -- # get_subsystem_names 
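The get_notification_count steps above only count events newer than `$notify_id` and then advance it, which is why the log shows notify_id stepping 0 -> 1 -> 2 as null0 and null1 each surface one bdev-add notification. A stubbed sketch of that bookkeeping, where `count_new` stands in for `rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length'`:

```shell
#!/usr/bin/env bash
# Per-step delta counting: each call consumes only notifications newer
# than the high-water mark, then moves the mark forward.
notify_id=0
count_new() {
  notification_count=$1   # stub value; the real helper queries the RPC socket
  notify_id=$((notify_id + notification_count))
}
count_new 1   # null0 attached -> nvme0n1 event
count_new 1   # null1 attached -> nvme0n2 event
echo "notification_count=$notification_count notify_id=$notify_id"
```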
00:30:23.073 17:41:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:23.073 17:41:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:23.073 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.073 17:41:02 -- host/discovery.sh@59 -- # sort 00:30:23.073 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:30:23.073 17:41:02 -- host/discovery.sh@59 -- # xargs 00:30:23.332 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@119 -- # get_bdev_list 00:30:23.332 17:41:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:23.332 17:41:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:23.332 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.332 17:41:02 -- host/discovery.sh@55 -- # sort 00:30:23.332 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:30:23.332 17:41:02 -- host/discovery.sh@55 -- # xargs 00:30:23.332 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:30:23.332 17:41:02 -- host/discovery.sh@63 -- # sort -n 00:30:23.332 17:41:02 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:23.332 17:41:02 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:23.332 17:41:02 -- host/discovery.sh@63 -- # xargs 00:30:23.332 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.332 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:30:23.332 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@121 -- # 
get_notification_count 00:30:23.332 17:41:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:23.332 17:41:02 -- host/discovery.sh@74 -- # jq '. | length' 00:30:23.332 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.332 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:30:23.332 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@74 -- # notification_count=0 00:30:23.332 17:41:02 -- host/discovery.sh@75 -- # notify_id=2 00:30:23.332 17:41:02 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:23.332 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.332 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:30:23.332 [2024-07-12 17:41:02.234709] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:23.332 [2024-07-12 17:41:02.234739] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:23.332 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.332 17:41:02 -- host/discovery.sh@127 -- # sleep 1 00:30:23.332 [2024-07-12 17:41:02.240492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.332 [2024-07-12 17:41:02.240517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.332 [2024-07-12 17:41:02.240529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.332 [2024-07-12 17:41:02.240539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:23.332 [2024-07-12 17:41:02.240550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.332 [2024-07-12 17:41:02.240561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.332 [2024-07-12 17:41:02.240573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.332 [2024-07-12 17:41:02.240583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.332 [2024-07-12 17:41:02.240594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.332 [2024-07-12 17:41:02.250503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.332 [2024-07-12 17:41:02.260547] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.332 [2024-07-12 17:41:02.260776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.261064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.261082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9ec0 with addr=10.0.0.2, port=4420 00:30:23.332 [2024-07-12 17:41:02.261093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.332 [2024-07-12 17:41:02.261115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.332 [2024-07-12 17:41:02.261140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.332 [2024-07-12 17:41:02.261152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.332 [2024-07-12 17:41:02.261163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.332 [2024-07-12 17:41:02.261179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.332 [2024-07-12 17:41:02.270612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.332 [2024-07-12 17:41:02.270749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.271979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.272012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9ec0 with addr=10.0.0.2, port=4420 00:30:23.332 [2024-07-12 17:41:02.272025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.332 [2024-07-12 17:41:02.272046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.332 [2024-07-12 17:41:02.272084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.332 [2024-07-12 17:41:02.272096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.332 [2024-07-12 17:41:02.272108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.332 [2024-07-12 17:41:02.272125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.332 [2024-07-12 17:41:02.280675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.332 [2024-07-12 17:41:02.280831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.281104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.281121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9ec0 with addr=10.0.0.2, port=4420 00:30:23.332 [2024-07-12 17:41:02.281132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.332 [2024-07-12 17:41:02.281149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.332 [2024-07-12 17:41:02.281186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.332 [2024-07-12 17:41:02.281198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.332 [2024-07-12 17:41:02.281209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.332 [2024-07-12 17:41:02.281224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.332 [2024-07-12 17:41:02.290741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.332 [2024-07-12 17:41:02.290953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.291104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.332 [2024-07-12 17:41:02.291121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9ec0 with addr=10.0.0.2, port=4420 00:30:23.332 [2024-07-12 17:41:02.291132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.332 [2024-07-12 17:41:02.291149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.332 [2024-07-12 17:41:02.291170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.332 [2024-07-12 17:41:02.291179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.332 [2024-07-12 17:41:02.291190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.332 [2024-07-12 17:41:02.291206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.591 [2024-07-12 17:41:02.300804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.591 [2024-07-12 17:41:02.300966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.591 [2024-07-12 17:41:02.301123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.591 [2024-07-12 17:41:02.301138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9ec0 with addr=10.0.0.2, port=4420 00:30:23.591 [2024-07-12 17:41:02.301149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.591 [2024-07-12 17:41:02.301165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.591 [2024-07-12 17:41:02.301179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.591 [2024-07-12 17:41:02.301189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.591 [2024-07-12 17:41:02.301199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.591 [2024-07-12 17:41:02.301214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.591 [2024-07-12 17:41:02.310866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.591 [2024-07-12 17:41:02.311113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.591 [2024-07-12 17:41:02.311308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.591 [2024-07-12 17:41:02.311325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9ec0 with addr=10.0.0.2, port=4420 00:30:23.591 [2024-07-12 17:41:02.311335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.591 [2024-07-12 17:41:02.311351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.591 [2024-07-12 17:41:02.311377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.591 [2024-07-12 17:41:02.311387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.591 [2024-07-12 17:41:02.311398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.591 [2024-07-12 17:41:02.311413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.591 [2024-07-12 17:41:02.320925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.591 [2024-07-12 17:41:02.321191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.591 [2024-07-12 17:41:02.321422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.591 [2024-07-12 17:41:02.321438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9ec0 with addr=10.0.0.2, port=4420 00:30:23.591 [2024-07-12 17:41:02.321449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ec0 is same with the state(5) to be set 00:30:23.591 [2024-07-12 17:41:02.321465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ec0 (9): Bad file descriptor 00:30:23.591 [2024-07-12 17:41:02.321500] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:23.591 [2024-07-12 17:41:02.321524] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:23.591 [2024-07-12 17:41:02.321549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.591 [2024-07-12 17:41:02.321559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.591 [2024-07-12 17:41:02.321569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.591 [2024-07-12 17:41:02.321587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.524 17:41:03 -- host/discovery.sh@128 -- # get_subsystem_names 00:30:24.524 17:41:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:24.524 17:41:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:24.524 17:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.524 17:41:03 -- common/autotest_common.sh@10 -- # set +x 00:30:24.524 17:41:03 -- host/discovery.sh@59 -- # sort 00:30:24.524 17:41:03 -- host/discovery.sh@59 -- # xargs 00:30:24.524 17:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@129 -- # get_bdev_list 00:30:24.524 17:41:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:24.524 17:41:03 -- host/discovery.sh@55 -- # xargs 00:30:24.524 17:41:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:24.524 17:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.524 17:41:03 -- host/discovery.sh@55 -- # sort 00:30:24.524 17:41:03 -- common/autotest_common.sh@10 -- # set +x 00:30:24.524 17:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:30:24.524 17:41:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:24.524 17:41:03 -- host/discovery.sh@63 -- # sort -n 00:30:24.524 17:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.524 17:41:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:24.524 17:41:03 -- common/autotest_common.sh@10 -- # set +x 00:30:24.524 17:41:03 -- host/discovery.sh@63 -- # xargs 00:30:24.524 17:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 
00:30:24.524 17:41:03 -- host/discovery.sh@131 -- # get_notification_count 00:30:24.524 17:41:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:24.524 17:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.524 17:41:03 -- common/autotest_common.sh@10 -- # set +x 00:30:24.524 17:41:03 -- host/discovery.sh@74 -- # jq '. | length' 00:30:24.524 17:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@74 -- # notification_count=0 00:30:24.524 17:41:03 -- host/discovery.sh@75 -- # notify_id=2 00:30:24.524 17:41:03 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:24.524 17:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.524 17:41:03 -- common/autotest_common.sh@10 -- # set +x 00:30:24.524 17:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.524 17:41:03 -- host/discovery.sh@135 -- # sleep 1 00:30:25.895 17:41:04 -- host/discovery.sh@136 -- # get_subsystem_names 00:30:25.895 17:41:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:25.895 17:41:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:25.895 17:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.895 17:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:25.895 17:41:04 -- host/discovery.sh@59 -- # sort 00:30:25.895 17:41:04 -- host/discovery.sh@59 -- # xargs 00:30:25.895 17:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.895 17:41:04 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:30:25.895 17:41:04 -- host/discovery.sh@137 -- # get_bdev_list 00:30:25.895 17:41:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:25.895 17:41:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:25.895 17:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.895 
17:41:04 -- host/discovery.sh@55 -- # sort 00:30:25.895 17:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:25.895 17:41:04 -- host/discovery.sh@55 -- # xargs 00:30:25.895 17:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.895 17:41:04 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:30:25.895 17:41:04 -- host/discovery.sh@138 -- # get_notification_count 00:30:25.895 17:41:04 -- host/discovery.sh@74 -- # jq '. | length' 00:30:25.895 17:41:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:25.895 17:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.895 17:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:25.895 17:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.895 17:41:04 -- host/discovery.sh@74 -- # notification_count=2 00:30:25.895 17:41:04 -- host/discovery.sh@75 -- # notify_id=4 00:30:25.895 17:41:04 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:30:25.895 17:41:04 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.895 17:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.895 17:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:26.828 [2024-07-12 17:41:05.662435] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:26.828 [2024-07-12 17:41:05.662456] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:26.828 [2024-07-12 17:41:05.662472] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:26.828 [2024-07-12 17:41:05.748754] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:27.394 [2024-07-12 17:41:06.059613] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:30:27.394 [2024-07-12 17:41:06.059647] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:27.394 17:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:27.394 17:41:06 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.394 17:41:06 -- common/autotest_common.sh@640 -- # local es=0 00:30:27.394 17:41:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.394 17:41:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:27.394 17:41:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:27.394 17:41:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:27.394 17:41:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:27.394 17:41:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.394 17:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.394 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:30:27.394 request: 00:30:27.394 { 00:30:27.394 "name": "nvme", 00:30:27.394 "trtype": "tcp", 00:30:27.394 "traddr": "10.0.0.2", 00:30:27.394 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:27.394 "adrfam": "ipv4", 00:30:27.394 "trsvcid": "8009", 00:30:27.394 "wait_for_attach": true, 00:30:27.394 "method": "bdev_nvme_start_discovery", 00:30:27.394 "req_id": 1 00:30:27.394 } 00:30:27.394 Got JSON-RPC error response 00:30:27.394 response: 00:30:27.394 { 00:30:27.394 "code": -17, 00:30:27.394 "message": "File exists" 00:30:27.394 } 00:30:27.394 17:41:06 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:30:27.394 17:41:06 -- common/autotest_common.sh@643 -- # es=1 00:30:27.394 17:41:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:27.394 17:41:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:27.394 17:41:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:27.394 17:41:06 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:30:27.394 17:41:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:27.394 17:41:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:27.394 17:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.394 17:41:06 -- host/discovery.sh@67 -- # sort 00:30:27.394 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:30:27.394 17:41:06 -- host/discovery.sh@67 -- # xargs 00:30:27.394 17:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:27.394 17:41:06 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:30:27.394 17:41:06 -- host/discovery.sh@147 -- # get_bdev_list 00:30:27.394 17:41:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:27.394 17:41:06 -- host/discovery.sh@55 -- # xargs 00:30:27.394 17:41:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:27.394 17:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.394 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:30:27.394 17:41:06 -- host/discovery.sh@55 -- # sort 00:30:27.394 17:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:27.394 17:41:06 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:27.394 17:41:06 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.394 17:41:06 -- common/autotest_common.sh@640 -- # local es=0 00:30:27.394 17:41:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b 
nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.394 17:41:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:27.394 17:41:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:27.394 17:41:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:27.394 17:41:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:27.394 17:41:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.394 17:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.394 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:30:27.394 request: 00:30:27.394 { 00:30:27.394 "name": "nvme_second", 00:30:27.394 "trtype": "tcp", 00:30:27.394 "traddr": "10.0.0.2", 00:30:27.394 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:27.394 "adrfam": "ipv4", 00:30:27.394 "trsvcid": "8009", 00:30:27.394 "wait_for_attach": true, 00:30:27.394 "method": "bdev_nvme_start_discovery", 00:30:27.394 "req_id": 1 00:30:27.394 } 00:30:27.394 Got JSON-RPC error response 00:30:27.394 response: 00:30:27.394 { 00:30:27.394 "code": -17, 00:30:27.394 "message": "File exists" 00:30:27.394 } 00:30:27.394 17:41:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:27.394 17:41:06 -- common/autotest_common.sh@643 -- # es=1 00:30:27.394 17:41:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:27.394 17:41:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:27.394 17:41:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:27.394 17:41:06 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:30:27.394 17:41:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:27.394 17:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.395 17:41:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:27.395 17:41:06 -- 
common/autotest_common.sh@10 -- # set +x 00:30:27.395 17:41:06 -- host/discovery.sh@67 -- # sort 00:30:27.395 17:41:06 -- host/discovery.sh@67 -- # xargs 00:30:27.395 17:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:27.395 17:41:06 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:30:27.395 17:41:06 -- host/discovery.sh@153 -- # get_bdev_list 00:30:27.395 17:41:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:27.395 17:41:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:27.395 17:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.395 17:41:06 -- host/discovery.sh@55 -- # sort 00:30:27.395 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:30:27.395 17:41:06 -- host/discovery.sh@55 -- # xargs 00:30:27.395 17:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:27.395 17:41:06 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:27.395 17:41:06 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:27.395 17:41:06 -- common/autotest_common.sh@640 -- # local es=0 00:30:27.395 17:41:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:27.395 17:41:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:27.395 17:41:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:27.395 17:41:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:27.395 17:41:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:27.395 17:41:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:27.395 17:41:06 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.395 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:30:28.766 [2024-07-12 17:41:07.315068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.766 [2024-07-12 17:41:07.315247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.766 [2024-07-12 17:41:07.315270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181bd80 with addr=10.0.0.2, port=8010 00:30:28.767 [2024-07-12 17:41:07.315284] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:28.767 [2024-07-12 17:41:07.315294] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:28.767 [2024-07-12 17:41:07.315305] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:29.699 [2024-07-12 17:41:08.317615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.699 [2024-07-12 17:41:08.317759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.699 [2024-07-12 17:41:08.317775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1804d40 with addr=10.0.0.2, port=8010 00:30:29.699 [2024-07-12 17:41:08.317789] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:29.699 [2024-07-12 17:41:08.317800] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:29.699 [2024-07-12 17:41:08.317810] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:30.634 [2024-07-12 17:41:09.319793] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:30.634 request: 00:30:30.634 { 00:30:30.634 "name": "nvme_second", 00:30:30.634 "trtype": "tcp", 00:30:30.634 "traddr": "10.0.0.2", 00:30:30.634 "hostnqn": "nqn.2021-12.io.spdk:test", 
00:30:30.634 "adrfam": "ipv4", 00:30:30.634 "trsvcid": "8010", 00:30:30.634 "attach_timeout_ms": 3000, 00:30:30.634 "method": "bdev_nvme_start_discovery", 00:30:30.634 "req_id": 1 00:30:30.634 } 00:30:30.634 Got JSON-RPC error response 00:30:30.634 response: 00:30:30.634 { 00:30:30.634 "code": -110, 00:30:30.634 "message": "Connection timed out" 00:30:30.634 } 00:30:30.634 17:41:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:30.634 17:41:09 -- common/autotest_common.sh@643 -- # es=1 00:30:30.634 17:41:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:30.634 17:41:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:30.634 17:41:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:30.634 17:41:09 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:30:30.634 17:41:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:30.634 17:41:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:30.634 17:41:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:30.634 17:41:09 -- host/discovery.sh@67 -- # sort 00:30:30.634 17:41:09 -- common/autotest_common.sh@10 -- # set +x 00:30:30.634 17:41:09 -- host/discovery.sh@67 -- # xargs 00:30:30.634 17:41:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:30.634 17:41:09 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:30:30.634 17:41:09 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:30:30.634 17:41:09 -- host/discovery.sh@162 -- # kill 96021 00:30:30.634 17:41:09 -- host/discovery.sh@163 -- # nvmftestfini 00:30:30.634 17:41:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:30.634 17:41:09 -- nvmf/common.sh@116 -- # sync 00:30:30.634 17:41:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:30.634 17:41:09 -- nvmf/common.sh@119 -- # set +e 00:30:30.634 17:41:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:30.634 17:41:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:30.634 rmmod nvme_tcp 00:30:30.634 
rmmod nvme_fabrics 00:30:30.634 rmmod nvme_keyring 00:30:30.634 17:41:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:30.634 17:41:09 -- nvmf/common.sh@123 -- # set -e 00:30:30.634 17:41:09 -- nvmf/common.sh@124 -- # return 0 00:30:30.634 17:41:09 -- nvmf/common.sh@477 -- # '[' -n 95829 ']' 00:30:30.634 17:41:09 -- nvmf/common.sh@478 -- # killprocess 95829 00:30:30.634 17:41:09 -- common/autotest_common.sh@926 -- # '[' -z 95829 ']' 00:30:30.634 17:41:09 -- common/autotest_common.sh@930 -- # kill -0 95829 00:30:30.635 17:41:09 -- common/autotest_common.sh@931 -- # uname 00:30:30.635 17:41:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:30.635 17:41:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95829 00:30:30.635 17:41:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:30.635 17:41:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:30.635 17:41:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95829' 00:30:30.635 killing process with pid 95829 00:30:30.635 17:41:09 -- common/autotest_common.sh@945 -- # kill 95829 00:30:30.635 17:41:09 -- common/autotest_common.sh@950 -- # wait 95829 00:30:30.893 17:41:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:30.893 17:41:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:30.893 17:41:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:30.893 17:41:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:30.893 17:41:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:30.893 17:41:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.893 17:41:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.893 17:41:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.795 17:41:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:32.795 00:30:32.795 real 0m21.333s 00:30:32.795 user 0m29.052s 00:30:32.795 sys 
0m5.692s 00:30:32.795 17:41:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.795 17:41:11 -- common/autotest_common.sh@10 -- # set +x 00:30:32.795 ************************************ 00:30:32.795 END TEST nvmf_discovery 00:30:32.795 ************************************ 00:30:33.053 17:41:11 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:33.053 17:41:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:33.053 17:41:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:33.053 17:41:11 -- common/autotest_common.sh@10 -- # set +x 00:30:33.053 ************************************ 00:30:33.053 START TEST nvmf_discovery_remove_ifc 00:30:33.053 ************************************ 00:30:33.053 17:41:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:33.053 * Looking for test storage... 
00:30:33.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.053 17:41:11 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.053 17:41:11 -- nvmf/common.sh@7 -- # uname -s 00:30:33.053 17:41:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.053 17:41:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.053 17:41:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.053 17:41:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.053 17:41:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.053 17:41:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.053 17:41:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.053 17:41:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.053 17:41:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.053 17:41:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.053 17:41:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:33.053 17:41:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:33.053 17:41:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.053 17:41:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.053 17:41:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.053 17:41:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.053 17:41:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.053 17:41:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.053 17:41:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.053 17:41:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.053 17:41:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.054 17:41:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.054 17:41:11 -- paths/export.sh@5 -- # export PATH 00:30:33.054 17:41:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.054 17:41:11 -- nvmf/common.sh@46 -- # : 0 00:30:33.054 17:41:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:33.054 17:41:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:33.054 17:41:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:33.054 17:41:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.054 17:41:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.054 17:41:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:33.054 17:41:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:33.054 17:41:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:33.054 17:41:11 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:33.054 17:41:11 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:33.054 17:41:11 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:33.054 17:41:11 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:33.054 17:41:11 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:33.054 17:41:11 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:33.054 17:41:11 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:33.054 17:41:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:33.054 17:41:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.054 17:41:11 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:30:33.054 17:41:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:33.054 17:41:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:33.054 17:41:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.054 17:41:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.054 17:41:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.054 17:41:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:33.054 17:41:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:33.054 17:41:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:33.054 17:41:11 -- common/autotest_common.sh@10 -- # set +x 00:30:38.318 17:41:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:38.318 17:41:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:38.318 17:41:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:38.318 17:41:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:38.318 17:41:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:38.319 17:41:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:38.319 17:41:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:38.319 17:41:17 -- nvmf/common.sh@294 -- # net_devs=() 00:30:38.319 17:41:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:38.319 17:41:17 -- nvmf/common.sh@295 -- # e810=() 00:30:38.319 17:41:17 -- nvmf/common.sh@295 -- # local -ga e810 00:30:38.319 17:41:17 -- nvmf/common.sh@296 -- # x722=() 00:30:38.319 17:41:17 -- nvmf/common.sh@296 -- # local -ga x722 00:30:38.319 17:41:17 -- nvmf/common.sh@297 -- # mlx=() 00:30:38.319 17:41:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:38.319 17:41:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.319 17:41:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:38.319 17:41:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:38.319 17:41:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:38.319 17:41:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:38.319 17:41:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:38.319 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:38.319 17:41:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:38.319 17:41:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:38.319 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:38.319 17:41:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:38.319 
17:41:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:38.319 17:41:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:38.319 17:41:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.319 17:41:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:38.319 17:41:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.319 17:41:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:38.319 Found net devices under 0000:af:00.0: cvl_0_0 00:30:38.319 17:41:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.319 17:41:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:38.319 17:41:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.319 17:41:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:38.319 17:41:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.319 17:41:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:38.319 Found net devices under 0000:af:00.1: cvl_0_1 00:30:38.319 17:41:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.319 17:41:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:38.319 17:41:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:38.319 17:41:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:38.319 17:41:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:38.319 17:41:17 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:30:38.319 17:41:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.319 17:41:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.319 17:41:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:38.319 17:41:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.319 17:41:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.319 17:41:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:38.319 17:41:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.319 17:41:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.319 17:41:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:38.319 17:41:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:38.319 17:41:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.319 17:41:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.577 17:41:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.577 17:41:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.577 17:41:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:38.577 17:41:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.577 17:41:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.577 17:41:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.577 17:41:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:38.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:38.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:30:38.577 00:30:38.577 --- 10.0.0.2 ping statistics --- 00:30:38.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.578 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:30:38.578 17:41:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:38.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:30:38.578 00:30:38.578 --- 10.0.0.1 ping statistics --- 00:30:38.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.578 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:38.578 17:41:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.578 17:41:17 -- nvmf/common.sh@410 -- # return 0 00:30:38.578 17:41:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:38.578 17:41:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.578 17:41:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:38.578 17:41:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:38.578 17:41:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.578 17:41:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:38.578 17:41:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:38.578 17:41:17 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:38.578 17:41:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:38.578 17:41:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:38.578 17:41:17 -- common/autotest_common.sh@10 -- # set +x 00:30:38.578 17:41:17 -- nvmf/common.sh@469 -- # nvmfpid=102045 00:30:38.578 17:41:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:38.578 17:41:17 -- nvmf/common.sh@470 -- # waitforlisten 102045 00:30:38.578 17:41:17 -- 
common/autotest_common.sh@819 -- # '[' -z 102045 ']' 00:30:38.578 17:41:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.578 17:41:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:38.578 17:41:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.578 17:41:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:38.578 17:41:17 -- common/autotest_common.sh@10 -- # set +x 00:30:38.838 [2024-07-12 17:41:17.550365] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:38.838 [2024-07-12 17:41:17.550420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.838 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.838 [2024-07-12 17:41:17.627685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.838 [2024-07-12 17:41:17.669522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:38.838 [2024-07-12 17:41:17.669665] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.838 [2024-07-12 17:41:17.669681] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.838 [2024-07-12 17:41:17.669690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:38.838 [2024-07-12 17:41:17.669718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.773 17:41:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:39.773 17:41:18 -- common/autotest_common.sh@852 -- # return 0 00:30:39.773 17:41:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:39.774 17:41:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:39.774 17:41:18 -- common/autotest_common.sh@10 -- # set +x 00:30:39.774 17:41:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.774 17:41:18 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:39.774 17:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:39.774 17:41:18 -- common/autotest_common.sh@10 -- # set +x 00:30:39.774 [2024-07-12 17:41:18.529510] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.774 [2024-07-12 17:41:18.537688] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:39.774 null0 00:30:39.774 [2024-07-12 17:41:18.569699] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.774 17:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:39.774 17:41:18 -- host/discovery_remove_ifc.sh@59 -- # hostpid=102142 00:30:39.774 17:41:18 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 102142 /tmp/host.sock 00:30:39.774 17:41:18 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:39.774 17:41:18 -- common/autotest_common.sh@819 -- # '[' -z 102142 ']' 00:30:39.774 17:41:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:39.774 17:41:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:39.774 17:41:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock...' 00:30:39.774 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:39.774 17:41:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:39.774 17:41:18 -- common/autotest_common.sh@10 -- # set +x 00:30:39.774 [2024-07-12 17:41:18.640148] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:39.774 [2024-07-12 17:41:18.640204] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102142 ] 00:30:39.774 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.774 [2024-07-12 17:41:18.712909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.031 [2024-07-12 17:41:18.755365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:40.031 [2024-07-12 17:41:18.755515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.031 17:41:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:40.031 17:41:18 -- common/autotest_common.sh@852 -- # return 0 00:30:40.031 17:41:18 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:40.031 17:41:18 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:40.032 17:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.032 17:41:18 -- common/autotest_common.sh@10 -- # set +x 00:30:40.032 17:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.032 17:41:18 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:40.032 17:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.032 17:41:18 -- common/autotest_common.sh@10 -- # set +x 00:30:40.032 17:41:18 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.032 17:41:18 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:40.032 17:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.032 17:41:18 -- common/autotest_common.sh@10 -- # set +x 00:30:41.402 [2024-07-12 17:41:19.946312] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:41.402 [2024-07-12 17:41:19.946343] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:41.402 [2024-07-12 17:41:19.946360] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:41.402 [2024-07-12 17:41:20.073798] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:41.402 [2024-07-12 17:41:20.258913] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:41.402 [2024-07-12 17:41:20.258962] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:41.402 [2024-07-12 17:41:20.258991] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:41.402 [2024-07-12 17:41:20.259009] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:41.402 [2024-07-12 17:41:20.259038] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:41.402 17:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:41.402 17:41:20 -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.402 17:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:41.402 [2024-07-12 17:41:20.265625] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x75a780 was disconnected and freed. delete nvme_qpair. 00:30:41.402 17:41:20 -- common/autotest_common.sh@10 -- # set +x 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:41.402 17:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:41.402 17:41:20 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:41.660 17:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:41.660 17:41:20 -- common/autotest_common.sh@10 -- # set +x 00:30:41.660 17:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:41.660 17:41:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:42.595 17:41:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:42.595 17:41:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:30:42.595 17:41:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:42.595 17:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.595 17:41:21 -- common/autotest_common.sh@10 -- # set +x 00:30:42.595 17:41:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:42.595 17:41:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:42.595 17:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.595 17:41:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:42.595 17:41:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:43.968 17:41:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:43.968 17:41:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.968 17:41:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:43.968 17:41:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:43.968 17:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:43.968 17:41:22 -- common/autotest_common.sh@10 -- # set +x 00:30:43.968 17:41:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:43.968 17:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:43.968 17:41:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:43.968 17:41:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:44.899 17:41:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:44.899 17:41:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.899 17:41:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:44.899 17:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.899 17:41:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:44.899 17:41:23 -- common/autotest_common.sh@10 -- # set +x 00:30:44.899 17:41:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:44.899 17:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.899 17:41:23 
-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:44.899 17:41:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:45.832 17:41:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:45.832 17:41:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:45.832 17:41:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.832 17:41:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:45.832 17:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.832 17:41:24 -- common/autotest_common.sh@10 -- # set +x 00:30:45.832 17:41:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:45.832 17:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.832 17:41:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:45.832 17:41:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:46.769 [2024-07-12 17:41:25.699740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:46.769 [2024-07-12 17:41:25.699786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.769 [2024-07-12 17:41:25.699800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.769 [2024-07-12 17:41:25.699813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.769 [2024-07-12 17:41:25.699822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.769 [2024-07-12 17:41:25.699833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.769 [2024-07-12 
17:41:25.699842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.769 [2024-07-12 17:41:25.699853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.769 [2024-07-12 17:41:25.699862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.769 [2024-07-12 17:41:25.699873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.769 [2024-07-12 17:41:25.699882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.769 [2024-07-12 17:41:25.699893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x720b80 is same with the state(5) to be set 00:30:46.769 17:41:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:46.769 [2024-07-12 17:41:25.709760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x720b80 (9): Bad file descriptor 00:30:46.769 17:41:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.769 17:41:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:46.769 17:41:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:46.769 17:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.769 17:41:25 -- common/autotest_common.sh@10 -- # set +x 00:30:46.769 17:41:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:46.769 [2024-07-12 17:41:25.719807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.769 17:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.027 17:41:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 
00:30:47.027 17:41:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:47.960 [2024-07-12 17:41:26.722288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:47.960 17:41:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:47.960 17:41:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.960 17:41:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:47.960 17:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.960 17:41:26 -- common/autotest_common.sh@10 -- # set +x 00:30:47.960 17:41:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:47.960 17:41:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:48.894 [2024-07-12 17:41:27.746311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:48.894 [2024-07-12 17:41:27.746382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x720b80 with addr=10.0.0.2, port=4420 00:30:48.894 [2024-07-12 17:41:27.746413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x720b80 is same with the state(5) to be set 00:30:48.894 [2024-07-12 17:41:27.746453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:48.894 [2024-07-12 17:41:27.746475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:48.894 [2024-07-12 17:41:27.746494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:48.894 [2024-07-12 17:41:27.746514] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:48.894 [2024-07-12 17:41:27.747335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x720b80 (9): Bad file descriptor 00:30:48.894 [2024-07-12 17:41:27.747388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.894 [2024-07-12 17:41:27.747433] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:48.894 [2024-07-12 17:41:27.747481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.894 [2024-07-12 17:41:27.747508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.894 [2024-07-12 17:41:27.747534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.894 [2024-07-12 17:41:27.747556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.894 [2024-07-12 17:41:27.747581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.894 [2024-07-12 17:41:27.747602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.894 [2024-07-12 
17:41:27.747624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.894 [2024-07-12 17:41:27.747646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.894 [2024-07-12 17:41:27.747669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.894 [2024-07-12 17:41:27.747690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.894 [2024-07-12 17:41:27.747711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:30:48.894 [2024-07-12 17:41:27.747765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x720f90 (9): Bad file descriptor 00:30:48.894 [2024-07-12 17:41:27.748768] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:48.894 [2024-07-12 17:41:27.748800] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:48.894 17:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.894 17:41:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:48.894 17:41:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:49.827 17:41:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:49.827 17:41:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.827 17:41:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:49.827 17:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.827 17:41:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:49.827 17:41:28 -- common/autotest_common.sh@10 -- # set +x 00:30:49.827 17:41:28 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:30:49.827 17:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:50.085 17:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:50.085 17:41:28 -- common/autotest_common.sh@10 -- # set +x 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:50.085 17:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:50.085 17:41:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:51.017 [2024-07-12 17:41:29.802930] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:51.017 [2024-07-12 17:41:29.802951] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:51.017 [2024-07-12 17:41:29.802968] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.017 [2024-07-12 17:41:29.932477] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:51.017 17:41:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.017 17:41:29 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.017 17:41:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.017 17:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.017 17:41:29 -- common/autotest_common.sh@10 -- # set +x 00:30:51.017 17:41:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.017 17:41:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.275 17:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.275 17:41:30 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:51.275 17:41:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:51.275 [2024-07-12 17:41:30.034466] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:51.275 [2024-07-12 17:41:30.034512] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:51.275 [2024-07-12 17:41:30.034535] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:51.275 [2024-07-12 17:41:30.034552] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:51.275 [2024-07-12 17:41:30.034563] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:51.275 [2024-07-12 17:41:30.041519] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x72e7a0 was disconnected and freed. delete nvme_qpair. 
00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:52.210 17:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.210 17:41:31 -- common/autotest_common.sh@10 -- # set +x 00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:52.210 17:41:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:52.210 17:41:31 -- host/discovery_remove_ifc.sh@90 -- # killprocess 102142 00:30:52.210 17:41:31 -- common/autotest_common.sh@926 -- # '[' -z 102142 ']' 00:30:52.210 17:41:31 -- common/autotest_common.sh@930 -- # kill -0 102142 00:30:52.210 17:41:31 -- common/autotest_common.sh@931 -- # uname 00:30:52.210 17:41:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:52.210 17:41:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102142 00:30:52.210 17:41:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:52.210 17:41:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:52.210 17:41:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102142' 00:30:52.210 killing process with pid 102142 00:30:52.210 17:41:31 -- common/autotest_common.sh@945 -- # kill 102142 00:30:52.210 17:41:31 -- common/autotest_common.sh@950 -- # wait 102142 00:30:52.469 17:41:31 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:52.469 17:41:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:52.469 17:41:31 -- nvmf/common.sh@116 -- # sync 00:30:52.469 17:41:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:52.469 17:41:31 
-- nvmf/common.sh@119 -- # set +e 00:30:52.469 17:41:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:52.469 17:41:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:52.469 rmmod nvme_tcp 00:30:52.469 rmmod nvme_fabrics 00:30:52.469 rmmod nvme_keyring 00:30:52.469 17:41:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:52.469 17:41:31 -- nvmf/common.sh@123 -- # set -e 00:30:52.469 17:41:31 -- nvmf/common.sh@124 -- # return 0 00:30:52.469 17:41:31 -- nvmf/common.sh@477 -- # '[' -n 102045 ']' 00:30:52.469 17:41:31 -- nvmf/common.sh@478 -- # killprocess 102045 00:30:52.469 17:41:31 -- common/autotest_common.sh@926 -- # '[' -z 102045 ']' 00:30:52.469 17:41:31 -- common/autotest_common.sh@930 -- # kill -0 102045 00:30:52.469 17:41:31 -- common/autotest_common.sh@931 -- # uname 00:30:52.469 17:41:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:52.469 17:41:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102045 00:30:52.469 17:41:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:52.469 17:41:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:52.469 17:41:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102045' 00:30:52.469 killing process with pid 102045 00:30:52.469 17:41:31 -- common/autotest_common.sh@945 -- # kill 102045 00:30:52.469 17:41:31 -- common/autotest_common.sh@950 -- # wait 102045 00:30:52.727 17:41:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:52.727 17:41:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:52.727 17:41:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:52.727 17:41:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.727 17:41:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:52.727 17:41:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.727 17:41:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.727 
17:41:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.320 17:41:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:55.320 00:30:55.320 real 0m21.883s 00:30:55.320 user 0m26.865s 00:30:55.320 sys 0m5.490s 00:30:55.320 17:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.320 17:41:33 -- common/autotest_common.sh@10 -- # set +x 00:30:55.320 ************************************ 00:30:55.320 END TEST nvmf_discovery_remove_ifc 00:30:55.320 ************************************ 00:30:55.320 17:41:33 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:30:55.320 17:41:33 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:55.320 17:41:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:55.320 17:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.320 17:41:33 -- common/autotest_common.sh@10 -- # set +x 00:30:55.320 ************************************ 00:30:55.320 START TEST nvmf_digest 00:30:55.320 ************************************ 00:30:55.320 17:41:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:55.320 * Looking for test storage... 
00:30:55.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.320 17:41:33 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.320 17:41:33 -- nvmf/common.sh@7 -- # uname -s 00:30:55.320 17:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.320 17:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.320 17:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.320 17:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.320 17:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.320 17:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.320 17:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.320 17:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.320 17:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.320 17:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.320 17:41:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:55.320 17:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:55.320 17:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.320 17:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.320 17:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.320 17:41:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.320 17:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.320 17:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.321 17:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.321 17:41:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.321 17:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.321 17:41:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.321 17:41:33 -- paths/export.sh@5 -- # export PATH 00:30:55.321 17:41:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.321 17:41:33 -- nvmf/common.sh@46 -- # : 0 00:30:55.321 17:41:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:55.321 17:41:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:55.321 17:41:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:55.321 17:41:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.321 17:41:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.321 17:41:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:55.321 17:41:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:55.321 17:41:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:55.321 17:41:33 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:55.321 17:41:33 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:55.321 17:41:33 -- host/digest.sh@16 -- # runtime=2 00:30:55.321 17:41:33 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:55.321 17:41:33 -- host/digest.sh@132 -- # nvmftestinit 00:30:55.321 17:41:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:55.321 17:41:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.321 17:41:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:55.321 17:41:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:55.321 17:41:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:55.321 17:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.321 17:41:33 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:30:55.321 17:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.321 17:41:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:55.321 17:41:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:55.321 17:41:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:55.321 17:41:33 -- common/autotest_common.sh@10 -- # set +x 00:31:00.625 17:41:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:00.625 17:41:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:00.625 17:41:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:00.625 17:41:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:00.625 17:41:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:00.625 17:41:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:00.625 17:41:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:00.625 17:41:39 -- nvmf/common.sh@294 -- # net_devs=() 00:31:00.625 17:41:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:00.625 17:41:39 -- nvmf/common.sh@295 -- # e810=() 00:31:00.625 17:41:39 -- nvmf/common.sh@295 -- # local -ga e810 00:31:00.625 17:41:39 -- nvmf/common.sh@296 -- # x722=() 00:31:00.625 17:41:39 -- nvmf/common.sh@296 -- # local -ga x722 00:31:00.625 17:41:39 -- nvmf/common.sh@297 -- # mlx=() 00:31:00.625 17:41:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:00.625 17:41:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.625 17:41:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:00.625 17:41:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:00.625 17:41:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:00.625 17:41:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:00.625 17:41:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:00.625 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:00.625 17:41:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:00.625 17:41:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:00.625 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:00.625 17:41:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:31:00.625 17:41:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:00.625 17:41:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:00.625 17:41:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.625 17:41:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:00.625 17:41:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.625 17:41:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:00.625 Found net devices under 0000:af:00.0: cvl_0_0 00:31:00.625 17:41:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.625 17:41:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:00.625 17:41:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.625 17:41:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:00.625 17:41:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.625 17:41:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:00.625 Found net devices under 0000:af:00.1: cvl_0_1 00:31:00.625 17:41:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.625 17:41:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:00.625 17:41:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:00.625 17:41:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:00.625 17:41:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.625 17:41:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.625 17:41:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.625 17:41:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:00.625 17:41:39 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.625 17:41:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.625 17:41:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:00.625 17:41:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.625 17:41:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.625 17:41:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:00.625 17:41:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:00.625 17:41:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.625 17:41:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.625 17:41:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.625 17:41:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.625 17:41:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:00.625 17:41:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.625 17:41:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.625 17:41:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.625 17:41:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:00.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:31:00.625 00:31:00.625 --- 10.0.0.2 ping statistics --- 00:31:00.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.625 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:31:00.625 17:41:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:31:00.625 00:31:00.625 --- 10.0.0.1 ping statistics --- 00:31:00.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.625 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:00.625 17:41:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.625 17:41:39 -- nvmf/common.sh@410 -- # return 0 00:31:00.625 17:41:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:00.625 17:41:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.625 17:41:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:00.625 17:41:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.625 17:41:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:00.625 17:41:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:00.625 17:41:39 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:00.625 17:41:39 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:31:00.625 17:41:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:00.625 17:41:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:00.625 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.625 ************************************ 00:31:00.625 START TEST nvmf_digest_clean 00:31:00.625 ************************************ 00:31:00.625 17:41:39 -- common/autotest_common.sh@1104 -- # run_digest 00:31:00.625 17:41:39 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:31:00.625 17:41:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:00.625 17:41:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:00.625 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.625 17:41:39 -- nvmf/common.sh@469 -- # nvmfpid=108019 00:31:00.625 17:41:39 -- nvmf/common.sh@470 -- # waitforlisten 108019 00:31:00.625 17:41:39 -- 
nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:00.625 17:41:39 -- common/autotest_common.sh@819 -- # '[' -z 108019 ']' 00:31:00.625 17:41:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.625 17:41:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:00.625 17:41:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.625 17:41:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:00.625 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.625 [2024-07-12 17:41:39.481444] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:00.625 [2024-07-12 17:41:39.481499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.625 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.625 [2024-07-12 17:41:39.568211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.884 [2024-07-12 17:41:39.609628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:00.884 [2024-07-12 17:41:39.609767] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.884 [2024-07-12 17:41:39.609778] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.885 [2024-07-12 17:41:39.609788] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:00.885 [2024-07-12 17:41:39.609813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.885 17:41:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:00.885 17:41:39 -- common/autotest_common.sh@852 -- # return 0 00:31:00.885 17:41:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:00.885 17:41:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:00.885 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.885 17:41:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.885 17:41:39 -- host/digest.sh@120 -- # common_target_config 00:31:00.885 17:41:39 -- host/digest.sh@43 -- # rpc_cmd 00:31:00.885 17:41:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.885 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.885 null0 00:31:00.885 [2024-07-12 17:41:39.800086] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.885 [2024-07-12 17:41:39.824294] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.885 17:41:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.885 17:41:39 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:31:00.885 17:41:39 -- host/digest.sh@77 -- # local rw bs qd 00:31:00.885 17:41:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:00.885 17:41:39 -- host/digest.sh@80 -- # rw=randread 00:31:00.885 17:41:39 -- host/digest.sh@80 -- # bs=4096 00:31:00.885 17:41:39 -- host/digest.sh@80 -- # qd=128 00:31:00.885 17:41:39 -- host/digest.sh@82 -- # bperfpid=108180 00:31:00.885 17:41:39 -- host/digest.sh@83 -- # waitforlisten 108180 /var/tmp/bperf.sock 00:31:00.885 17:41:39 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:00.885 17:41:39 -- 
common/autotest_common.sh@819 -- # '[' -z 108180 ']' 00:31:00.885 17:41:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:00.885 17:41:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:00.885 17:41:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:00.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:00.885 17:41:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:00.885 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:31:01.143 [2024-07-12 17:41:39.875497] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:01.143 [2024-07-12 17:41:39.875553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108180 ] 00:31:01.143 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.143 [2024-07-12 17:41:39.947047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.143 [2024-07-12 17:41:39.988044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.709 17:41:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:01.709 17:41:40 -- common/autotest_common.sh@852 -- # return 0 00:31:01.709 17:41:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:01.709 17:41:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:01.709 17:41:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:02.276 17:41:40 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:02.276 17:41:40 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:02.276 nvme0n1 00:31:02.534 17:41:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:02.534 17:41:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:02.534 Running I/O for 2 seconds... 00:31:05.060 00:31:05.060 Latency(us) 00:31:05.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.060 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:05.060 nvme0n1 : 2.05 14213.41 55.52 0.00 0.00 8823.26 3634.27 49330.73 00:31:05.060 =================================================================================================================== 00:31:05.060 Total : 14213.41 55.52 0.00 0.00 8823.26 3634.27 49330.73 00:31:05.060 0 00:31:05.060 17:41:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:05.060 17:41:43 -- host/digest.sh@92 -- # get_accel_stats 00:31:05.060 17:41:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:05.060 17:41:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:05.060 | select(.opcode=="crc32c") 00:31:05.060 | "\(.module_name) \(.executed)"' 00:31:05.060 17:41:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:05.060 17:41:43 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:05.060 17:41:43 -- host/digest.sh@93 -- # exp_module=software 00:31:05.060 17:41:43 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:05.060 17:41:43 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:05.060 17:41:43 -- host/digest.sh@97 -- # killprocess 108180 00:31:05.060 17:41:43 -- common/autotest_common.sh@926 -- # '[' -z 108180 ']' 00:31:05.060 17:41:43 -- 
common/autotest_common.sh@930 -- # kill -0 108180 00:31:05.060 17:41:43 -- common/autotest_common.sh@931 -- # uname 00:31:05.060 17:41:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:05.060 17:41:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108180 00:31:05.060 17:41:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:05.061 17:41:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:05.061 17:41:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108180' 00:31:05.061 killing process with pid 108180 00:31:05.061 17:41:43 -- common/autotest_common.sh@945 -- # kill 108180 00:31:05.061 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.061 00:31:05.061 Latency(us) 00:31:05.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.061 =================================================================================================================== 00:31:05.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.061 17:41:43 -- common/autotest_common.sh@950 -- # wait 108180 00:31:05.061 17:41:43 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:05.061 17:41:43 -- host/digest.sh@77 -- # local rw bs qd 00:31:05.061 17:41:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:05.061 17:41:43 -- host/digest.sh@80 -- # rw=randread 00:31:05.061 17:41:43 -- host/digest.sh@80 -- # bs=131072 00:31:05.061 17:41:43 -- host/digest.sh@80 -- # qd=16 00:31:05.061 17:41:43 -- host/digest.sh@82 -- # bperfpid=108845 00:31:05.061 17:41:43 -- host/digest.sh@83 -- # waitforlisten 108845 /var/tmp/bperf.sock 00:31:05.061 17:41:43 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:05.061 17:41:43 -- common/autotest_common.sh@819 -- # '[' -z 108845 ']' 00:31:05.061 17:41:43 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:05.061 17:41:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:05.061 17:41:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:05.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:05.061 17:41:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:05.061 17:41:43 -- common/autotest_common.sh@10 -- # set +x 00:31:05.061 [2024-07-12 17:41:43.935968] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:05.061 [2024-07-12 17:41:43.936025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108845 ] 00:31:05.061 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:05.061 Zero copy mechanism will not be used. 
00:31:05.061 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.061 [2024-07-12 17:41:44.007033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.319 [2024-07-12 17:41:44.049015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.319 17:41:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:05.319 17:41:44 -- common/autotest_common.sh@852 -- # return 0 00:31:05.319 17:41:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:05.319 17:41:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:05.319 17:41:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:05.577 17:41:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:05.577 17:41:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:05.833 nvme0n1 00:31:06.091 17:41:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:06.091 17:41:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:06.091 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:06.091 Zero copy mechanism will not be used. 00:31:06.091 Running I/O for 2 seconds... 
00:31:07.989 00:31:07.989 Latency(us) 00:31:07.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.990 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:07.990 nvme0n1 : 2.00 4445.87 555.73 0.00 0.00 3595.12 1050.07 8877.15 00:31:07.990 =================================================================================================================== 00:31:07.990 Total : 4445.87 555.73 0.00 0.00 3595.12 1050.07 8877.15 00:31:07.990 0 00:31:07.990 17:41:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:07.990 17:41:46 -- host/digest.sh@92 -- # get_accel_stats 00:31:07.990 17:41:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:07.990 17:41:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:07.990 | select(.opcode=="crc32c") 00:31:07.990 | "\(.module_name) \(.executed)"' 00:31:07.990 17:41:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:08.247 17:41:47 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:08.247 17:41:47 -- host/digest.sh@93 -- # exp_module=software 00:31:08.247 17:41:47 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:08.247 17:41:47 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:08.247 17:41:47 -- host/digest.sh@97 -- # killprocess 108845 00:31:08.247 17:41:47 -- common/autotest_common.sh@926 -- # '[' -z 108845 ']' 00:31:08.247 17:41:47 -- common/autotest_common.sh@930 -- # kill -0 108845 00:31:08.247 17:41:47 -- common/autotest_common.sh@931 -- # uname 00:31:08.247 17:41:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:08.247 17:41:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108845 00:31:08.505 17:41:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:08.505 17:41:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:08.505 17:41:47 -- common/autotest_common.sh@944 
-- # echo 'killing process with pid 108845' 00:31:08.505 killing process with pid 108845 00:31:08.505 17:41:47 -- common/autotest_common.sh@945 -- # kill 108845 00:31:08.505 Received shutdown signal, test time was about 2.000000 seconds 00:31:08.506 00:31:08.506 Latency(us) 00:31:08.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.506 =================================================================================================================== 00:31:08.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:08.506 17:41:47 -- common/autotest_common.sh@950 -- # wait 108845 00:31:08.506 17:41:47 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:08.506 17:41:47 -- host/digest.sh@77 -- # local rw bs qd 00:31:08.506 17:41:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:08.506 17:41:47 -- host/digest.sh@80 -- # rw=randwrite 00:31:08.506 17:41:47 -- host/digest.sh@80 -- # bs=4096 00:31:08.506 17:41:47 -- host/digest.sh@80 -- # qd=128 00:31:08.506 17:41:47 -- host/digest.sh@82 -- # bperfpid=109530 00:31:08.506 17:41:47 -- host/digest.sh@83 -- # waitforlisten 109530 /var/tmp/bperf.sock 00:31:08.506 17:41:47 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:08.506 17:41:47 -- common/autotest_common.sh@819 -- # '[' -z 109530 ']' 00:31:08.506 17:41:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:08.506 17:41:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:08.506 17:41:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:08.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:08.506 17:41:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:08.506 17:41:47 -- common/autotest_common.sh@10 -- # set +x 00:31:08.506 [2024-07-12 17:41:47.451439] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:08.506 [2024-07-12 17:41:47.451503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109530 ] 00:31:08.764 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.764 [2024-07-12 17:41:47.522659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.764 [2024-07-12 17:41:47.564678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.764 17:41:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:08.764 17:41:47 -- common/autotest_common.sh@852 -- # return 0 00:31:08.764 17:41:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:08.764 17:41:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:08.764 17:41:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:09.021 17:41:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:09.022 17:41:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:09.588 nvme0n1 00:31:09.588 17:41:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:09.588 17:41:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:09.588 Running I/O for 2 seconds... 
00:31:12.122 00:31:12.122 Latency(us) 00:31:12.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.122 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.122 nvme0n1 : 2.00 19104.88 74.63 0.00 0.00 6692.89 2993.80 11439.01 00:31:12.122 =================================================================================================================== 00:31:12.122 Total : 19104.88 74.63 0.00 0.00 6692.89 2993.80 11439.01 00:31:12.122 0 00:31:12.122 17:41:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:12.122 17:41:50 -- host/digest.sh@92 -- # get_accel_stats 00:31:12.122 17:41:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:12.122 17:41:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:12.122 | select(.opcode=="crc32c") 00:31:12.122 | "\(.module_name) \(.executed)"' 00:31:12.122 17:41:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:12.122 17:41:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:12.122 17:41:50 -- host/digest.sh@93 -- # exp_module=software 00:31:12.122 17:41:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:12.122 17:41:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:12.122 17:41:50 -- host/digest.sh@97 -- # killprocess 109530 00:31:12.122 17:41:50 -- common/autotest_common.sh@926 -- # '[' -z 109530 ']' 00:31:12.122 17:41:50 -- common/autotest_common.sh@930 -- # kill -0 109530 00:31:12.122 17:41:50 -- common/autotest_common.sh@931 -- # uname 00:31:12.122 17:41:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:12.122 17:41:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109530 00:31:12.122 17:41:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:12.122 17:41:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:12.122 17:41:50 -- common/autotest_common.sh@944 
-- # echo 'killing process with pid 109530' 00:31:12.122 killing process with pid 109530 00:31:12.122 17:41:50 -- common/autotest_common.sh@945 -- # kill 109530 00:31:12.122 Received shutdown signal, test time was about 2.000000 seconds 00:31:12.122 00:31:12.122 Latency(us) 00:31:12.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.122 =================================================================================================================== 00:31:12.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.122 17:41:50 -- common/autotest_common.sh@950 -- # wait 109530 00:31:12.122 17:41:51 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:31:12.122 17:41:51 -- host/digest.sh@77 -- # local rw bs qd 00:31:12.122 17:41:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:12.122 17:41:51 -- host/digest.sh@80 -- # rw=randwrite 00:31:12.122 17:41:51 -- host/digest.sh@80 -- # bs=131072 00:31:12.122 17:41:51 -- host/digest.sh@80 -- # qd=16 00:31:12.122 17:41:51 -- host/digest.sh@82 -- # bperfpid=110183 00:31:12.122 17:41:51 -- host/digest.sh@83 -- # waitforlisten 110183 /var/tmp/bperf.sock 00:31:12.122 17:41:51 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:12.122 17:41:51 -- common/autotest_common.sh@819 -- # '[' -z 110183 ']' 00:31:12.122 17:41:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:12.122 17:41:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:12.122 17:41:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:12.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:12.122 17:41:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:12.122 17:41:51 -- common/autotest_common.sh@10 -- # set +x 00:31:12.122 [2024-07-12 17:41:51.068169] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:12.122 [2024-07-12 17:41:51.068231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110183 ] 00:31:12.122 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:12.122 Zero copy mechanism will not be used. 00:31:12.381 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.381 [2024-07-12 17:41:51.140944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.381 [2024-07-12 17:41:51.179791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.381 17:41:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:12.381 17:41:51 -- common/autotest_common.sh@852 -- # return 0 00:31:12.381 17:41:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:12.381 17:41:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:12.381 17:41:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:12.639 17:41:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:12.639 17:41:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:13.205 nvme0n1 00:31:13.205 17:41:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:13.205 17:41:51 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:13.205 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:13.205 Zero copy mechanism will not be used. 00:31:13.205 Running I/O for 2 seconds... 00:31:15.104 00:31:15.104 Latency(us) 00:31:15.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.104 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:15.104 nvme0n1 : 2.00 6169.14 771.14 0.00 0.00 2588.34 2189.50 10545.34 00:31:15.104 =================================================================================================================== 00:31:15.104 Total : 6169.14 771.14 0.00 0.00 2588.34 2189.50 10545.34 00:31:15.104 0 00:31:15.104 17:41:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:15.104 17:41:54 -- host/digest.sh@92 -- # get_accel_stats 00:31:15.104 17:41:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:15.104 17:41:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:15.104 17:41:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:15.104 | select(.opcode=="crc32c") 00:31:15.104 | "\(.module_name) \(.executed)"' 00:31:15.362 17:41:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:15.362 17:41:54 -- host/digest.sh@93 -- # exp_module=software 00:31:15.362 17:41:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:15.362 17:41:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:15.362 17:41:54 -- host/digest.sh@97 -- # killprocess 110183 00:31:15.362 17:41:54 -- common/autotest_common.sh@926 -- # '[' -z 110183 ']' 00:31:15.362 17:41:54 -- common/autotest_common.sh@930 -- # kill -0 110183 00:31:15.362 17:41:54 -- common/autotest_common.sh@931 -- # uname 00:31:15.362 17:41:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:15.362 17:41:54 
-- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110183 00:31:15.362 17:41:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:15.362 17:41:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:15.362 17:41:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110183' 00:31:15.362 killing process with pid 110183 00:31:15.362 17:41:54 -- common/autotest_common.sh@945 -- # kill 110183 00:31:15.362 Received shutdown signal, test time was about 2.000000 seconds 00:31:15.362 00:31:15.362 Latency(us) 00:31:15.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.362 =================================================================================================================== 00:31:15.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:15.362 17:41:54 -- common/autotest_common.sh@950 -- # wait 110183 00:31:15.622 17:41:54 -- host/digest.sh@126 -- # killprocess 108019 00:31:15.622 17:41:54 -- common/autotest_common.sh@926 -- # '[' -z 108019 ']' 00:31:15.622 17:41:54 -- common/autotest_common.sh@930 -- # kill -0 108019 00:31:15.622 17:41:54 -- common/autotest_common.sh@931 -- # uname 00:31:15.622 17:41:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:15.622 17:41:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108019 00:31:15.622 17:41:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:15.622 17:41:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:15.622 17:41:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108019' 00:31:15.622 killing process with pid 108019 00:31:15.622 17:41:54 -- common/autotest_common.sh@945 -- # kill 108019 00:31:15.622 17:41:54 -- common/autotest_common.sh@950 -- # wait 108019 00:31:15.881 00:31:15.881 real 0m15.311s 00:31:15.881 user 0m30.361s 00:31:15.881 sys 0m4.367s 00:31:15.881 17:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:31:15.881 17:41:54 -- common/autotest_common.sh@10 -- # set +x 00:31:15.881 ************************************ 00:31:15.881 END TEST nvmf_digest_clean 00:31:15.881 ************************************ 00:31:15.881 17:41:54 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:31:15.881 17:41:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:15.881 17:41:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:15.881 17:41:54 -- common/autotest_common.sh@10 -- # set +x 00:31:15.881 ************************************ 00:31:15.881 START TEST nvmf_digest_error 00:31:15.881 ************************************ 00:31:15.881 17:41:54 -- common/autotest_common.sh@1104 -- # run_digest_error 00:31:15.881 17:41:54 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:31:15.881 17:41:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:15.881 17:41:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:15.881 17:41:54 -- common/autotest_common.sh@10 -- # set +x 00:31:15.881 17:41:54 -- nvmf/common.sh@469 -- # nvmfpid=110791 00:31:15.881 17:41:54 -- nvmf/common.sh@470 -- # waitforlisten 110791 00:31:15.881 17:41:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:15.881 17:41:54 -- common/autotest_common.sh@819 -- # '[' -z 110791 ']' 00:31:15.881 17:41:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.881 17:41:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:15.881 17:41:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:15.881 17:41:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:15.881 17:41:54 -- common/autotest_common.sh@10 -- # set +x 00:31:15.881 [2024-07-12 17:41:54.834806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:15.881 [2024-07-12 17:41:54.834867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.167 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.167 [2024-07-12 17:41:54.922421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.167 [2024-07-12 17:41:54.962755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:16.167 [2024-07-12 17:41:54.962903] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.167 [2024-07-12 17:41:54.962915] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.167 [2024-07-12 17:41:54.962925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:16.167 [2024-07-12 17:41:54.962952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.732 17:41:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:16.732 17:41:55 -- common/autotest_common.sh@852 -- # return 0 00:31:16.732 17:41:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:16.732 17:41:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:16.732 17:41:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.990 17:41:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.990 17:41:55 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:16.990 17:41:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.991 17:41:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.991 [2024-07-12 17:41:55.717229] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:16.991 17:41:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.991 17:41:55 -- host/digest.sh@104 -- # common_target_config 00:31:16.991 17:41:55 -- host/digest.sh@43 -- # rpc_cmd 00:31:16.991 17:41:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.991 17:41:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.991 null0 00:31:16.991 [2024-07-12 17:41:55.809862] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.991 [2024-07-12 17:41:55.834055] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.991 17:41:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.991 17:41:55 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:31:16.991 17:41:55 -- host/digest.sh@54 -- # local rw bs qd 00:31:16.991 17:41:55 -- host/digest.sh@56 -- # rw=randread 00:31:16.991 17:41:55 -- host/digest.sh@56 -- # bs=4096 00:31:16.991 17:41:55 -- host/digest.sh@56 -- # qd=128 00:31:16.991 17:41:55 -- 
host/digest.sh@58 -- # bperfpid=111040 00:31:16.991 17:41:55 -- host/digest.sh@60 -- # waitforlisten 111040 /var/tmp/bperf.sock 00:31:16.991 17:41:55 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:16.991 17:41:55 -- common/autotest_common.sh@819 -- # '[' -z 111040 ']' 00:31:16.991 17:41:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:16.991 17:41:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:16.991 17:41:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:16.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:16.991 17:41:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:16.991 17:41:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.991 [2024-07-12 17:41:55.887155] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:16.991 [2024-07-12 17:41:55.887212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111040 ] 00:31:16.991 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.991 [2024-07-12 17:41:55.957681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.249 [2024-07-12 17:41:55.999388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.182 17:41:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:18.182 17:41:56 -- common/autotest_common.sh@852 -- # return 0 00:31:18.182 17:41:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:18.182 17:41:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:18.182 17:41:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:18.182 17:41:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.182 17:41:56 -- common/autotest_common.sh@10 -- # set +x 00:31:18.182 17:41:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.182 17:41:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:18.182 17:41:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:18.749 nvme0n1 00:31:18.749 17:41:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:18.749 17:41:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.749 17:41:57 -- common/autotest_common.sh@10 -- # 
set +x 00:31:18.749 17:41:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.749 17:41:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:18.749 17:41:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:18.749 Running I/O for 2 seconds... 00:31:18.749 [2024-07-12 17:41:57.558972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.559014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.559029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.571984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.572017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.572030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.589909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.589937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.589950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.608576] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.608604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.608617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.627352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.627380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.627392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.645807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.645836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.645847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.664718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.664746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.664759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.676709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.676736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.676748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.694095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.694122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.694135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.749 [2024-07-12 17:41:57.713111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:18.749 [2024-07-12 17:41:57.713141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.749 [2024-07-12 17:41:57.713153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.008 [2024-07-12 17:41:57.732207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.008 [2024-07-12 17:41:57.732235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.008 [2024-07-12 17:41:57.732249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.008 [2024-07-12 17:41:57.744661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.008 [2024-07-12 17:41:57.744688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.008 [2024-07-12 17:41:57.744701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.008 [2024-07-12 17:41:57.762419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.008 [2024-07-12 17:41:57.762447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.008 [2024-07-12 17:41:57.762459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.008 [2024-07-12 17:41:57.781168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.008 [2024-07-12 17:41:57.781196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.008 [2024-07-12 17:41:57.781213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.008 [2024-07-12 17:41:57.799947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.008 [2024-07-12 17:41:57.799975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.008 [2024-07-12 
17:41:57.799987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.008 [2024-07-12 17:41:57.818562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.008 [2024-07-12 17:41:57.818590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.818602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.837422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.837450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.837462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.856146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.856174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.856186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.875122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.875151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1317 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.875163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.887244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.887278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.887291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.912608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.912636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.912648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.931158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.931185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.931197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.949764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.949796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.949808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.009 [2024-07-12 17:41:57.962090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.009 [2024-07-12 17:41:57.962117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.009 [2024-07-12 17:41:57.962129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.267 [2024-07-12 17:41:57.979460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.267 [2024-07-12 17:41:57.979486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.267 [2024-07-12 17:41:57.979498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.267 [2024-07-12 17:41:57.998506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.267 [2024-07-12 17:41:57.998535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.267 [2024-07-12 17:41:57.998547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.267 [2024-07-12 17:41:58.015379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b5b0c0) 00:31:19.267 [2024-07-12 17:41:58.015408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.267 [2024-07-12 17:41:58.015420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.267 [2024-07-12 17:41:58.028074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.267 [2024-07-12 17:41:58.028101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.028113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.046648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.046676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.046688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.066535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.066562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.066574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.084098] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.084126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.084137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.102722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.102750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.102762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.121099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.121126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.121138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.140240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.140275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.140288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.158898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.158926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.158938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.177496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.177524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.177536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.196028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.196054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.196066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.214112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.214141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.214153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.268 [2024-07-12 17:41:58.230478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.268 [2024-07-12 17:41:58.230505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.268 [2024-07-12 17:41:58.230518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.526 [2024-07-12 17:41:58.243634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.526 [2024-07-12 17:41:58.243666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.526 [2024-07-12 17:41:58.243678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.526 [2024-07-12 17:41:58.262253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.526 [2024-07-12 17:41:58.262286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.526 [2024-07-12 17:41:58.262298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.526 [2024-07-12 17:41:58.280791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.526 [2024-07-12 17:41:58.280818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.526 [2024-07-12 17:41:58.280830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.526 [2024-07-12 17:41:58.299218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.526 [2024-07-12 17:41:58.299245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.526 [2024-07-12 17:41:58.299263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.526 [2024-07-12 17:41:58.317855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.526 [2024-07-12 17:41:58.317883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.526 [2024-07-12 17:41:58.317894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.526 [2024-07-12 17:41:58.336396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.526 [2024-07-12 17:41:58.336423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.336435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.354923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.354950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5549 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.354962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.366922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.366948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.366960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.383974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.384001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.384013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.402535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.402562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.402574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.419355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.419381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:11611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.432227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.432260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.432272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.450604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.450632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.469325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 17:41:58.469352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.469363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.527 [2024-07-12 17:41:58.487975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.527 [2024-07-12 
17:41:58.488002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.527 [2024-07-12 17:41:58.488014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.506566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.506593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.506605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.525109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.525135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.525147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.543633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.543659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.543679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.562428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.562456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.562468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.581299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.581327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.581339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.599863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.599889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.599902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.618439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.618465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.618477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.630398] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.630423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.630435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.647881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.647908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.647920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.665943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.665970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.665982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.684180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.684206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.684218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.703328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.703359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.703372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.721881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.721909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.721922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.738804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.738831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.738842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.786 [2024-07-12 17:41:58.751469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:19.786 [2024-07-12 17:41:58.751496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.786 [2024-07-12 17:41:58.751508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.769956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.769983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.769995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.788777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.788804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.788816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.806981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.807008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.807020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.825652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.825678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 
17:41:58.825690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.844350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.844378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.844390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.856104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.856131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.856142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.874003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.874031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.874043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.892876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.892903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21436 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.892915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.911620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.911647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.911660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.930497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.930524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.930536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.949481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.949509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.949521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.967961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.967988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.968000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:58.986690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.044 [2024-07-12 17:41:58.986717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.044 [2024-07-12 17:41:58.986729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.044 [2024-07-12 17:41:59.005226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.045 [2024-07-12 17:41:59.005260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.045 [2024-07-12 17:41:59.005277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.023750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.023777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.023790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.042208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.042235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.042247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.053863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.053889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.053900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.071714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.071741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.071752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.090194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.090220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.090232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.108840] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.108866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.108878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.127400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.127428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.127439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.145706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.145733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.145744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.164646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.164674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.164685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.183273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.183300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.183311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.201919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.201947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.201959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.213969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.213996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.214009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.231648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.231676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.231688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.250072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.250101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.250112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.301 [2024-07-12 17:41:59.268751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.301 [2024-07-12 17:41:59.268779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.301 [2024-07-12 17:41:59.268791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.287333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.287361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.287373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.306363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.306391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 
17:41:59.306407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.324978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.325005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.325017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.343275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.343303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.343315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.361767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.361794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.361806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.380449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.380477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2337 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.380489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.399048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.399076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.399088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.417825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.417853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.417866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.436851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.436878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.436890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.454179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.454206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.454218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.467041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.467072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.467085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.485513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.485541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.485552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.504162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0) 00:31:20.557 [2024-07-12 17:41:59.504190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.557 [2024-07-12 17:41:59.504202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:20.557 [2024-07-12 17:41:59.522761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b5b0c0)
00:31:20.557 [2024-07-12 17:41:59.522788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:20.557 [2024-07-12 17:41:59.522800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:20.815 [2024-07-12 17:41:59.541386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5b0c0)
00:31:20.815 [2024-07-12 17:41:59.541413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:20.815 [2024-07-12 17:41:59.541425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:20.815
00:31:20.815 Latency(us)
00:31:20.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:20.815 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:20.815 nvme0n1 : 2.01 14397.36 56.24 0.00 0.00 8883.00 4021.53 35985.22
00:31:20.815 ===================================================================================================================
00:31:20.815 Total : 14397.36 56.24 0.00 0.00 8883.00 4021.53 35985.22
00:31:20.815 0
00:31:20.815 17:41:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:20.815 17:41:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:20.815 17:41:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:20.815 | .driver_specific
00:31:20.815 | .nvme_error
00:31:20.815 | .status_code
00:31:20.815 | .command_transient_transport_error'
00:31:20.815 17:41:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:21.378 17:42:00 -- host/digest.sh@71 -- # (( 113 > 0 ))
00:31:21.378 17:42:00 -- host/digest.sh@73 -- # killprocess 111040
00:31:21.378 17:42:00 -- common/autotest_common.sh@926 -- # '[' -z 111040 ']'
00:31:21.378 17:42:00 -- common/autotest_common.sh@930 -- # kill -0 111040
00:31:21.379 17:42:00 -- common/autotest_common.sh@931 -- # uname
00:31:21.379 17:42:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:21.379 17:42:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111040
00:31:21.379 17:42:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:21.379 17:42:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:21.379 17:42:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111040'
killing process with pid 111040
17:42:00 -- common/autotest_common.sh@945 -- # kill 111040
Received shutdown signal, test time was about 2.000000 seconds
00:31:21.379
00:31:21.379 Latency(us)
00:31:21.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:21.379 ===================================================================================================================
00:31:21.379 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:21.379 17:42:00 -- common/autotest_common.sh@950 -- # wait 111040
00:31:21.379 17:42:00 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:31:21.379 17:42:00 -- host/digest.sh@54 -- # local rw bs qd
00:31:21.379 17:42:00 -- host/digest.sh@56 -- # rw=randread
00:31:21.379 17:42:00 -- host/digest.sh@56 -- # bs=131072
00:31:21.379 17:42:00 -- host/digest.sh@56 -- # qd=16
00:31:21.379 17:42:00 -- host/digest.sh@58 -- # bperfpid=111878
00:31:21.379 17:42:00 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:21.379 17:42:00 -- host/digest.sh@60 -- # waitforlisten 111878 /var/tmp/bperf.sock
00:31:21.379 17:42:00 --
common/autotest_common.sh@819 -- # '[' -z 111878 ']'
00:31:21.379 17:42:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:21.379 17:42:00 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:21.379 17:42:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:21.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:21.379 17:42:00 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:21.379 17:42:00 -- common/autotest_common.sh@10 -- # set +x
00:31:21.379 [2024-07-12 17:42:00.341558] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:21.379 [2024-07-12 17:42:00.341619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111878 ]
00:31:21.379 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:21.379 Zero copy mechanism will not be used.
00:31:21.637 EAL: No free 2048 kB hugepages reported on node 1
00:31:21.637 [2024-07-12 17:42:00.413401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:21.637 [2024-07-12 17:42:00.455625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:22.571 17:42:01 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:22.571 17:42:01 -- common/autotest_common.sh@852 -- # return 0
00:31:22.571 17:42:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:22.571 17:42:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:22.571 17:42:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:22.571 17:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:22.571 17:42:01 -- common/autotest_common.sh@10 -- # set +x
00:31:22.571 17:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:22.571 17:42:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:22.571 17:42:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:22.830 nvme0n1
00:31:22.830 17:42:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:22.830 17:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:22.830 17:42:01 -- common/autotest_common.sh@10 -- # set +x
00:31:22.830 17:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:22.830 17:42:01 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:22.830 17:42:01 -- host/digest.sh@19 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:22.830 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:22.830 Zero copy mechanism will not be used. 00:31:22.830 Running I/O for 2 seconds... 00:31:22.830 [2024-07-12 17:42:01.743183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 [2024-07-12 17:42:01.743224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.743239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:22.830 [2024-07-12 17:42:01.750721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 [2024-07-12 17:42:01.750752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.750766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.830 [2024-07-12 17:42:01.758348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 [2024-07-12 17:42:01.758376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.758388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.830 [2024-07-12 17:42:01.765480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 
[2024-07-12 17:42:01.765507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.765519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.830 [2024-07-12 17:42:01.772832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 [2024-07-12 17:42:01.772859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.772872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:22.830 [2024-07-12 17:42:01.778951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 [2024-07-12 17:42:01.778979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.778991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:22.830 [2024-07-12 17:42:01.785854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 [2024-07-12 17:42:01.785882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.785894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:22.830 [2024-07-12 17:42:01.792725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd350e0) 00:31:22.830 [2024-07-12 17:42:01.792753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.830 [2024-07-12 17:42:01.792766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.088 [2024-07-12 17:42:01.799715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.088 [2024-07-12 17:42:01.799742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.088 [2024-07-12 17:42:01.799759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.806892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.806919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.806931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.814097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.814125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.814138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.821385] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.821412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.821424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.828635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.828661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.828673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.835868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.835896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.835908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.842764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.842792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.842803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.849417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.849443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.849455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.856429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.856456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.856468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.863212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.863249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.863270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.870142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.870168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.870180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.877082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.877110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.877123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.884209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.884237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.884249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.891263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.891291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.891303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.898469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.898494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.898507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.905601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.905629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.905642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.912874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.912901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.912913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.919640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.919667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.919679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.926718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.926744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:23.089 [2024-07-12 17:42:01.926756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.933962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.933990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.934001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.941204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.941232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.941244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.948419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.948447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.948459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.956166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.956194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.956207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.963378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.963406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.963418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.970471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.970500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.970512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.977609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.977636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.977649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.984783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.984810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.984827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.991926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.991954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.991967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:01.999177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:01.999203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.089 [2024-07-12 17:42:01.999215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.089 [2024-07-12 17:42:02.006291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.089 [2024-07-12 17:42:02.006319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.090 [2024-07-12 17:42:02.006330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.090 [2024-07-12 17:42:02.013429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 
00:31:23.090 [2024-07-12 17:42:02.013456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.090 [2024-07-12 17:42:02.013468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.090 [2024-07-12 17:42:02.020559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.090 [2024-07-12 17:42:02.020585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.090 [2024-07-12 17:42:02.020597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.090 [2024-07-12 17:42:02.027919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.090 [2024-07-12 17:42:02.027947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.090 [2024-07-12 17:42:02.027959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.090 [2024-07-12 17:42:02.036006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.090 [2024-07-12 17:42:02.036035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.090 [2024-07-12 17:42:02.036047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.090 [2024-07-12 17:42:02.043970] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.090 [2024-07-12 17:42:02.043999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.090 [2024-07-12 17:42:02.044011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.090 [2024-07-12 17:42:02.051623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.090 [2024-07-12 17:42:02.051651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.090 [2024-07-12 17:42:02.051664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.059379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.059407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.059419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.066772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.066800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.066812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.074020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.074049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.074060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.081578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.081607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.081619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.088749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.088777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.088789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.096367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.096403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.096415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.104155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.104184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.104196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.348 [2024-07-12 17:42:02.111573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.348 [2024-07-12 17:42:02.111601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-07-12 17:42:02.111617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.118749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.118776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.118788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.126048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.126076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.126088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.133463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.133492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.133504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.141368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.141395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.141406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.148909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.148936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.148948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.156549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.156578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:23.349 [2024-07-12 17:42:02.156590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.163960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.163988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.164000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.171483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.171511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.171523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.178996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.179029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-07-12 17:42:02.179041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.349 [2024-07-12 17:42:02.186209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.349 [2024-07-12 17:42:02.186237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:23.349 [2024-07-12 17:42:02.186249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:23.349 [2024-07-12 17:42:02.193716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:23.349 [2024-07-12 17:42:02.193743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:23.349 [2024-07-12 17:42:02.193755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... same three-record pattern (nvme_tcp.c:1391 data digest error on tqpair=(0xd350e0); nvme_qpair.c:243 READ sqid:1 print with varying cid:0/1/15 and lba; nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeated for timestamps 17:42:02.201476 through 17:42:02.758302 ...]
00:31:23.869 [2024-07-12 17:42:02.765366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:23.869 [2024-07-12 17:42:02.765394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:23.869 [2024-07-12 17:42:02.765406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061
p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.772643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.772670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.772682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.780178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.780205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.780217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.787787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.787815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.787827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.795357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.795384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.795396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.802907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.802935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.802947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.810299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.810326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.810338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.817705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.817732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.817744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.825288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.825315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.825333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:23.869 [2024-07-12 17:42:02.833436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:23.869 [2024-07-12 17:42:02.833464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.869 [2024-07-12 17:42:02.833476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.841136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.841164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.841175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.848503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.848531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.848543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.856099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.856126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.856137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.863573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.863601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.863612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.871462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.871490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.871501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.879632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.879660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.879671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.887962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.887990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.888001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.895857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.895889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.895900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.904292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.904320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.904331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.913464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.913494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.913507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.922537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.922565] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.922577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.930268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.930296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.930308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.938197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.938226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.938238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.945604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.945632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.945644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.953122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.953150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.953162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.960854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.960882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.960894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.968640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.968668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.968681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.976179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.976208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.976220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.983929] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.983957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.983969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.991815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.991843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.991856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:02.999317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:02.999344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:02.999356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:03.006840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:03.006867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.129 [2024-07-12 17:42:03.006880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:31:24.129 [2024-07-12 17:42:03.014201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.129 [2024-07-12 17:42:03.014228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.014240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.021918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.021945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.021957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.029566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.029594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.029609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.037130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.037158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.037169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.044454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.044482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.044493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.052025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.052052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.052064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.059683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.059710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.059722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.067122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.067149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.067161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.074348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.074374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.074386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.081576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.081603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.081614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.130 [2024-07-12 17:42:03.088775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.130 [2024-07-12 17:42:03.088801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.130 [2024-07-12 17:42:03.088813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.389 [2024-07-12 17:42:03.096106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.389 [2024-07-12 17:42:03.096138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:24.389 [2024-07-12 17:42:03.096151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.389 [2024-07-12 17:42:03.103744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.389 [2024-07-12 17:42:03.103771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.389 [2024-07-12 17:42:03.103782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.389 [2024-07-12 17:42:03.110318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.389 [2024-07-12 17:42:03.110346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.389 [2024-07-12 17:42:03.110358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.117163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.117191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.117203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.123865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.123893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.123905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.131164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.131192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.131204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.138543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.138570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.138582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.144071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.144099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.144111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.149336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.149363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.149375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.154565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.154593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.154605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.159452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.159480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.159492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.164292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 00:31:24.390 [2024-07-12 17:42:03.164320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.390 [2024-07-12 17:42:03.164332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.390 [2024-07-12 17:42:03.169024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0) 
00:31:24.390 [2024-07-12 17:42:03.169052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.169064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.174668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.174695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.174707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.180434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.180461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.180473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.186590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.186617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.186629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.192970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.192997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.193009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.199481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.199507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.199523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.206116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.206143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.206154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.212547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.212575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.212586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.219053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.219081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.219092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.225768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.225795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.225806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.232947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.232976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.232987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.239226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.239263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.239277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.246198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.246226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.246239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.253952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.253980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.253992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.261575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.261608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.261620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.269837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.269865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.269878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.277987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.278015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.278027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.285981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.286008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.286020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.390 [2024-07-12 17:42:03.293670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.390 [2024-07-12 17:42:03.293697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.390 [2024-07-12 17:42:03.293709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.300939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.300968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.300981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.308376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.308403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.308415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.315993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.316021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.316033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.323326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.323354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.323366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.331435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.331463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.331476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.338818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.338847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.338859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.345022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.345051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.345063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.350201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.350228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.350240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.391 [2024-07-12 17:42:03.355132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.391 [2024-07-12 17:42:03.355160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.391 [2024-07-12 17:42:03.355172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.360470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.360498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.360511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.367059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.367088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.367100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.374024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.374052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.374065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.380847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.380875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.380892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.387747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.387775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.387787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.394673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.394701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.394713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.401637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.401665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.401677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.408489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.408516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.408528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.414835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.414863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.414875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.421595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.421622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.421633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.428554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.428582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.428594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.435356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.435384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.435395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.442192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.442219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.442231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.448899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.448927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.448939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.455862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.455890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.455902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.462859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.462887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.462899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.469797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.469824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.469836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.476793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.476821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.476833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.483928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.483955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.483966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.491226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.650 [2024-07-12 17:42:03.491261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.650 [2024-07-12 17:42:03.491273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.650 [2024-07-12 17:42:03.498855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.498882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.498898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.506448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.506474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.506486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.513853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.513881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.513893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.521127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.521154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.521166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.528353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.528380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.528392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.535620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.535647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.535659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.542917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.542942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.542954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.550192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.550219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.550230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.557439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.557466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.557478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.564757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.564789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.564801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.572081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.572109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.572121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.579569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.579595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.579607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.587158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.587186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.587197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.594560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.594587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.594599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.601827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.601853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.601865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.609044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.609071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.609083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.651 [2024-07-12 17:42:03.616446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.651 [2024-07-12 17:42:03.616472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.651 [2024-07-12 17:42:03.616484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.623980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.624006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.624018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.631538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.631565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.631577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.638767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.638794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.638806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.646042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.646070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.646081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.653419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.653447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.653459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.660917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.660943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.660955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.668249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.668282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.668294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.675564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.675592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.675604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.682552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.682579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.682591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.689763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.689789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.689805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.696960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.696987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.696999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.704215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.704242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.704261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.711214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.711241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.711260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.718507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.718535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.718547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.725729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.725756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.725767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.732956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.732982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.732994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:24.910 [2024-07-12 17:42:03.739697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd350e0)
00:31:24.910 [2024-07-12 17:42:03.739724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.910 [2024-07-12 17:42:03.739737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.910 00:31:24.910 Latency(us) 00:31:24.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.910 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:24.910 nvme0n1 : 2.00 4261.41 532.68 0.00 0.00 3750.12 1057.51 9532.51 00:31:24.910 =================================================================================================================== 00:31:24.910 Total : 4261.41 532.68 0.00 0.00 3750.12 1057.51 9532.51 00:31:24.910 0 00:31:24.910 17:42:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:24.910 17:42:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:24.910 | .driver_specific 00:31:24.910 | .nvme_error 00:31:24.910 | .status_code 00:31:24.910 | .command_transient_transport_error' 00:31:24.910 17:42:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:24.910 17:42:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:25.168 17:42:03 -- host/digest.sh@71 -- # (( 275 > 0 )) 00:31:25.168 17:42:03 -- host/digest.sh@73 -- # killprocess 111878 00:31:25.168 17:42:03 -- common/autotest_common.sh@926 -- # '[' -z 111878 ']' 00:31:25.168 17:42:03 -- common/autotest_common.sh@930 -- # kill -0 111878 00:31:25.168 17:42:03 -- common/autotest_common.sh@931 -- # uname 00:31:25.168 17:42:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:25.168 17:42:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111878 00:31:25.168 17:42:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:25.168 17:42:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:25.168 17:42:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
111878' 00:31:25.168 killing process with pid 111878 00:31:25.168 17:42:04 -- common/autotest_common.sh@945 -- # kill 111878 00:31:25.168 Received shutdown signal, test time was about 2.000000 seconds 00:31:25.168 00:31:25.168 Latency(us) 00:31:25.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.168 =================================================================================================================== 00:31:25.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:25.168 17:42:04 -- common/autotest_common.sh@950 -- # wait 111878 00:31:25.426 17:42:04 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:31:25.426 17:42:04 -- host/digest.sh@54 -- # local rw bs qd 00:31:25.426 17:42:04 -- host/digest.sh@56 -- # rw=randwrite 00:31:25.426 17:42:04 -- host/digest.sh@56 -- # bs=4096 00:31:25.426 17:42:04 -- host/digest.sh@56 -- # qd=128 00:31:25.426 17:42:04 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:25.426 17:42:04 -- host/digest.sh@58 -- # bperfpid=112757 00:31:25.426 17:42:04 -- host/digest.sh@60 -- # waitforlisten 112757 /var/tmp/bperf.sock 00:31:25.426 17:42:04 -- common/autotest_common.sh@819 -- # '[' -z 112757 ']' 00:31:25.426 17:42:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:25.426 17:42:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:25.426 17:42:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:25.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:25.426 17:42:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:25.426 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:25.426 [2024-07-12 17:42:04.250070] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:25.426 [2024-07-12 17:42:04.250115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112757 ] 00:31:25.426 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.426 [2024-07-12 17:42:04.311768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.426 [2024-07-12 17:42:04.354143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.683 17:42:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:25.683 17:42:04 -- common/autotest_common.sh@852 -- # return 0 00:31:25.683 17:42:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:25.683 17:42:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:25.684 17:42:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:25.684 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.684 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:25.684 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.684 17:42:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:25.684 17:42:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:31:25.941 nvme0n1 00:31:25.941 17:42:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:25.941 17:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.941 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:25.941 17:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.941 17:42:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:25.941 17:42:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:26.199 Running I/O for 2 seconds... 00:31:26.199 [2024-07-12 17:42:05.002045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ed920 00:31:26.199 [2024-07-12 17:42:05.003243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.199 [2024-07-12 17:42:05.003286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:26.199 [2024-07-12 17:42:05.015673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.199 [2024-07-12 17:42:05.016899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.199 [2024-07-12 17:42:05.016926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:26.199 [2024-07-12 17:42:05.029271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.199 [2024-07-12 17:42:05.030528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.199 [2024-07-12 
17:42:05.030554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:26.199 [2024-07-12 17:42:05.042789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.199 [2024-07-12 17:42:05.044053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.199 [2024-07-12 17:42:05.044079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:26.199 [2024-07-12 17:42:05.056325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.200 [2024-07-12 17:42:05.057604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.057630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.069836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.200 [2024-07-12 17:42:05.071129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.071154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.083320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.200 [2024-07-12 17:42:05.084630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6437 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.084655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.096809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.200 [2024-07-12 17:42:05.098131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.098155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.110310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.200 [2024-07-12 17:42:05.111646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.111671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.123826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.200 [2024-07-12 17:42:05.125173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.125197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.137505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e95a0 00:31:26.200 [2024-07-12 17:42:05.138652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:22917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.138678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.151009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eee38 00:31:26.200 [2024-07-12 17:42:05.152165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.152190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:26.200 [2024-07-12 17:42:05.164486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e1710 00:31:26.200 [2024-07-12 17:42:05.165661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.200 [2024-07-12 17:42:05.165686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:26.458 [2024-07-12 17:42:05.178017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e6b70 00:31:26.459 [2024-07-12 17:42:05.179193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.179217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.191496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f0bc0 00:31:26.459 [2024-07-12 17:42:05.192674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.192699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.204983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f0350 00:31:26.459 [2024-07-12 17:42:05.206182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.206207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.218847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e3498 00:31:26.459 [2024-07-12 17:42:05.219589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.219615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.232289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e12d8 00:31:26.459 [2024-07-12 17:42:05.232724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.232750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.245735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ee5c8 00:31:26.459 
[2024-07-12 17:42:05.246136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.246161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.259163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e6738 00:31:26.459 [2024-07-12 17:42:05.259528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.259553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.272625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eaef0 00:31:26.459 [2024-07-12 17:42:05.272950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.272976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.286235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ec408 00:31:26.459 [2024-07-12 17:42:05.286529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.286555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.299517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d42370) with pdu=0x2000190efae0 00:31:26.459 [2024-07-12 17:42:05.299764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.299789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.315389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f35f0 00:31:26.459 [2024-07-12 17:42:05.317320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.317345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.328917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eb328 00:31:26.459 [2024-07-12 17:42:05.330898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.330922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.342413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e0a68 00:31:26.459 [2024-07-12 17:42:05.344392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.344418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.355911] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e99d8 00:31:26.459 [2024-07-12 17:42:05.357885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.357911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.369540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f0bc0 00:31:26.459 [2024-07-12 17:42:05.371334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.371360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.382365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4de8 00:31:26.459 [2024-07-12 17:42:05.383427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.383452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.396070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f0788 00:31:26.459 [2024-07-12 17:42:05.397701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.397725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 
m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.409556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4de8 00:31:26.459 [2024-07-12 17:42:05.411173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.411198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:26.459 [2024-07-12 17:42:05.422839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fc560 00:31:26.459 [2024-07-12 17:42:05.423692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.459 [2024-07-12 17:42:05.423716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.436133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f0ff8 00:31:26.718 [2024-07-12 17:42:05.437903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.437932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.449680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f2948 00:31:26.718 [2024-07-12 17:42:05.451148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.451173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.463274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f1868 00:31:26.718 [2024-07-12 17:42:05.464661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.464686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.476796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef270 00:31:26.718 [2024-07-12 17:42:05.478333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.478357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.490305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5220 00:31:26.718 [2024-07-12 17:42:05.491814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.491839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.503817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5ec8 00:31:26.718 [2024-07-12 17:42:05.505362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.505387] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.517380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f2510 00:31:26.718 [2024-07-12 17:42:05.518846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.518870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.530919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ec840 00:31:26.718 [2024-07-12 17:42:05.532458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.532482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.544490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190edd58 00:31:26.718 [2024-07-12 17:42:05.546036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.718 [2024-07-12 17:42:05.546061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:26.718 [2024-07-12 17:42:05.558014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e84c0 00:31:26.718 [2024-07-12 17:42:05.559592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 
17:42:05.559616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.571544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e6300 00:31:26.719 [2024-07-12 17:42:05.573125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.573150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.584355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e38d0 00:31:26.719 [2024-07-12 17:42:05.585480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.585504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.597763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e1710 00:31:26.719 [2024-07-12 17:42:05.598966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.598991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.613725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ed920 00:31:26.719 [2024-07-12 17:42:05.615712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12764 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.615736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.625493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ecc78 00:31:26.719 [2024-07-12 17:42:05.626534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.626559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.639304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4de8 00:31:26.719 [2024-07-12 17:42:05.640636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.640661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.652813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4de8 00:31:26.719 [2024-07-12 17:42:05.654172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.654196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.666317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4de8 00:31:26.719 [2024-07-12 17:42:05.667675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:9742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.667699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:26.719 [2024-07-12 17:42:05.679712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e0630 00:31:26.719 [2024-07-12 17:42:05.681061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.719 [2024-07-12 17:42:05.681086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.693183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190efae0 00:31:26.978 [2024-07-12 17:42:05.694529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.694554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.706651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f4298 00:31:26.978 [2024-07-12 17:42:05.708617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.708642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.720123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef270 00:31:26.978 [2024-07-12 17:42:05.721460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.721486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.732467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5220 00:31:26.978 [2024-07-12 17:42:05.733661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.733685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.745953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f2d80 00:31:26.978 [2024-07-12 17:42:05.747085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.747111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.759459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e0630 00:31:26.978 [2024-07-12 17:42:05.760604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.760629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.773152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f3e60 00:31:26.978 
[2024-07-12 17:42:05.774427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.774452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.786632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fbcf0 00:31:26.978 [2024-07-12 17:42:05.787821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.787853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.800147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5220 00:31:26.978 [2024-07-12 17:42:05.801344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.801369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.813609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f8618 00:31:26.978 [2024-07-12 17:42:05.814798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.814823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.827093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d42370) with pdu=0x2000190e5658 00:31:26.978 [2024-07-12 17:42:05.828299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.828323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.840919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fb8b8 00:31:26.978 [2024-07-12 17:42:05.841909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.841934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.854574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef270 00:31:26.978 [2024-07-12 17:42:05.855578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.855602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.868059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f2510 00:31:26.978 [2024-07-12 17:42:05.869071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.869096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.881576] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef6a8 00:31:26.978 [2024-07-12 17:42:05.882600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.882624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.895079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e0ea0 00:31:26.978 [2024-07-12 17:42:05.896122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.896146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.908542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4140 00:31:26.978 [2024-07-12 17:42:05.909604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.909629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:26.978 [2024-07-12 17:42:05.922041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5ec8 00:31:26.978 [2024-07-12 17:42:05.923112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.923136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:31:26.978 [2024-07-12 17:42:05.935674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4140 00:31:26.978 [2024-07-12 17:42:05.935792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.978 [2024-07-12 17:42:05.935814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:05.949311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f7538 00:31:27.244 [2024-07-12 17:42:05.949644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:05.949668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:05.962713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e3060 00:31:27.244 [2024-07-12 17:42:05.963014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:05.963039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:05.978382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e0ea0 00:31:27.244 [2024-07-12 17:42:05.980199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:05.980223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:05.991868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e3d08 00:31:27.244 [2024-07-12 17:42:05.993697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:05.993722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.005322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ebfd0 00:31:27.244 [2024-07-12 17:42:06.007160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.007185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.018808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eb760 00:31:27.244 [2024-07-12 17:42:06.020672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.020696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.032308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e0630 00:31:27.244 [2024-07-12 17:42:06.034184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.034208] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.045973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f6cc8 00:31:27.244 [2024-07-12 17:42:06.047681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.047706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.059467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eb760 00:31:27.244 [2024-07-12 17:42:06.061175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.061200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.072919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eea00 00:31:27.244 [2024-07-12 17:42:06.074654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.074679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.086383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef270 00:31:27.244 [2024-07-12 17:42:06.088128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.088152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.099886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5ec8 00:31:27.244 [2024-07-12 17:42:06.101648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.101672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.113374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e3060 00:31:27.244 [2024-07-12 17:42:06.115147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.115171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.126898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ee5c8 00:31:27.244 [2024-07-12 17:42:06.128690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.128714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.140544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef270 00:31:27.244 [2024-07-12 17:42:06.141357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.244 [2024-07-12 17:42:06.141385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.152309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef6a8 00:31:27.244 [2024-07-12 17:42:06.153523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.153548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.165753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef6a8 00:31:27.244 [2024-07-12 17:42:06.167085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.167109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.181158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f7100 00:31:27.244 [2024-07-12 17:42:06.182055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.182080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.194578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190dece0 00:31:27.244 [2024-07-12 17:42:06.195439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5927 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.195463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.244 [2024-07-12 17:42:06.208035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e4140 00:31:27.244 [2024-07-12 17:42:06.208863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.244 [2024-07-12 17:42:06.208888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.221452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e8d30 00:31:27.507 [2024-07-12 17:42:06.222235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.222269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.234878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5ec8 00:31:27.507 [2024-07-12 17:42:06.235623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.235648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.248324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eff18 00:31:27.507 [2024-07-12 17:42:06.249064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.249089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.261747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f35f0 00:31:27.507 [2024-07-12 17:42:06.262540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.262566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.275203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e6738 00:31:27.507 [2024-07-12 17:42:06.276035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.276060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.288460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ec408 00:31:27.507 [2024-07-12 17:42:06.290235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.290266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.301886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f4f40 00:31:27.507 [2024-07-12 17:42:06.303125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.303150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.316344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ed920 00:31:27.507 [2024-07-12 17:42:06.317273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.317298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.327911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f8618 00:31:27.507 [2024-07-12 17:42:06.329315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.329339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.341348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ea248 00:31:27.507 [2024-07-12 17:42:06.342870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.342894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.354839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190dece0 
00:31:27.507 [2024-07-12 17:42:06.356465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.356490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.367664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5658 00:31:27.507 [2024-07-12 17:42:06.368690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.368715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.381133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f5be8 00:31:27.507 [2024-07-12 17:42:06.382170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.382194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.394630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fac10 00:31:27.507 [2024-07-12 17:42:06.395682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.395706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.408099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d42370) with pdu=0x2000190f81e0 00:31:27.507 [2024-07-12 17:42:06.409167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.409192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.421598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f1430 00:31:27.507 [2024-07-12 17:42:06.422680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.422704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.435125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eea00 00:31:27.507 [2024-07-12 17:42:06.436224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.436248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.448628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190df988 00:31:27.507 [2024-07-12 17:42:06.449740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.449765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.507 [2024-07-12 17:42:06.462140] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e0630 00:31:27.507 [2024-07-12 17:42:06.463266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.507 [2024-07-12 17:42:06.463291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.475993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ea680 00:31:27.765 [2024-07-12 17:42:06.476627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.476651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.489397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190eff18 00:31:27.765 [2024-07-12 17:42:06.489733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.489761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.502829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5a90 00:31:27.765 [2024-07-12 17:42:06.503122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.503146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:31:27.765 [2024-07-12 17:42:06.518504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f6458 00:31:27.765 [2024-07-12 17:42:06.520315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.520340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.531982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ea680 00:31:27.765 [2024-07-12 17:42:06.533806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.533829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.545480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f4b08 00:31:27.765 [2024-07-12 17:42:06.547322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.547346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.558986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f2d80 00:31:27.765 [2024-07-12 17:42:06.560846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.560870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.572484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef6a8 00:31:27.765 [2024-07-12 17:42:06.574348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.574372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.585960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fa3a0 00:31:27.765 [2024-07-12 17:42:06.587843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.587867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.599556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f4298 00:31:27.765 [2024-07-12 17:42:06.601224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.601249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.612652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e5658 00:31:27.765 [2024-07-12 17:42:06.614150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.614174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.626046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190dfdc0 00:31:27.765 [2024-07-12 17:42:06.627547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.627572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.639444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fac10 00:31:27.765 [2024-07-12 17:42:06.640918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.640942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.652824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f1430 00:31:27.765 [2024-07-12 17:42:06.654307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.654333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.666527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e38d0 00:31:27.765 [2024-07-12 17:42:06.667720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.667745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.680214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fc998 00:31:27.765 [2024-07-12 17:42:06.681656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.681680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.693697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fcdd0 00:31:27.765 [2024-07-12 17:42:06.695151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.695175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.707164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fcdd0 00:31:27.765 [2024-07-12 17:42:06.708633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 [2024-07-12 17:42:06.708658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.765 [2024-07-12 17:42:06.720664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fc998 00:31:27.765 [2024-07-12 17:42:06.722196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.765 
[2024-07-12 17:42:06.722220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.734192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e38d0 00:31:28.024 [2024-07-12 17:42:06.735686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.735709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.747687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f7970 00:31:28.024 [2024-07-12 17:42:06.749194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.749219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.761171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f7538 00:31:28.024 [2024-07-12 17:42:06.762698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.762722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.772698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190dece0 00:31:28.024 [2024-07-12 17:42:06.773553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7929 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.773578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.786214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190efae0 00:31:28.024 [2024-07-12 17:42:06.787086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.787111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.799719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190dfdc0 00:31:28.024 [2024-07-12 17:42:06.800619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.800644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.813222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190efae0 00:31:28.024 [2024-07-12 17:42:06.814124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.814149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.826728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190dece0 00:31:28.024 [2024-07-12 17:42:06.827646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:33 nsid:1 lba:5722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.827670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.840448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ef6a8 00:31:28.024 [2024-07-12 17:42:06.841381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.841410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.853985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ff3c8 00:31:28.024 [2024-07-12 17:42:06.854941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.854966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.867594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f35f0 00:31:28.024 [2024-07-12 17:42:06.868559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.868584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.882887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e6738 00:31:28.024 [2024-07-12 17:42:06.884758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.884784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.896343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190e6300 00:31:28.024 [2024-07-12 17:42:06.898153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.898179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.909864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fe2e8 00:31:28.024 [2024-07-12 17:42:06.911100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.911126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.924274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190ddc00 00:31:28.024 [2024-07-12 17:42:06.925286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.925311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.935892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f2510 
00:31:28.024 [2024-07-12 17:42:06.937391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.937415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.949387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190f2510 00:31:28.024 [2024-07-12 17:42:06.950991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.951016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.961874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fc998 00:31:28.024 [2024-07-12 17:42:06.963129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.963154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.975734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42370) with pdu=0x2000190fc560 00:31:28.024 [2024-07-12 17:42:06.976873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.976897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:28.024 [2024-07-12 17:42:06.989183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d42370) with pdu=0x2000190ea680 00:31:28.024 [2024-07-12 17:42:06.989874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.024 [2024-07-12 17:42:06.989898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:28.323 00:31:28.323 Latency(us) 00:31:28.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.323 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:28.323 nvme0n1 : 2.01 18819.58 73.51 0.00 0.00 6793.29 3291.69 17992.61 00:31:28.323 =================================================================================================================== 00:31:28.323 Total : 18819.58 73.51 0.00 0.00 6793.29 3291.69 17992.61 00:31:28.323 0 00:31:28.323 17:42:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:28.323 17:42:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:28.323 | .driver_specific 00:31:28.323 | .nvme_error 00:31:28.323 | .status_code 00:31:28.323 | .command_transient_transport_error' 00:31:28.323 17:42:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:28.323 17:42:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:28.323 17:42:07 -- host/digest.sh@71 -- # (( 148 > 0 )) 00:31:28.323 17:42:07 -- host/digest.sh@73 -- # killprocess 112757 00:31:28.323 17:42:07 -- common/autotest_common.sh@926 -- # '[' -z 112757 ']' 00:31:28.323 17:42:07 -- common/autotest_common.sh@930 -- # kill -0 112757 00:31:28.323 17:42:07 -- common/autotest_common.sh@931 -- # uname 00:31:28.323 17:42:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:28.323 17:42:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112757 00:31:28.613 17:42:07 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:28.613 17:42:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:28.613 17:42:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112757' 00:31:28.613 killing process with pid 112757 00:31:28.613 17:42:07 -- common/autotest_common.sh@945 -- # kill 112757 00:31:28.613 Received shutdown signal, test time was about 2.000000 seconds 00:31:28.613 00:31:28.613 Latency(us) 00:31:28.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.613 =================================================================================================================== 00:31:28.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:28.613 17:42:07 -- common/autotest_common.sh@950 -- # wait 112757 00:31:28.613 17:42:07 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:31:28.613 17:42:07 -- host/digest.sh@54 -- # local rw bs qd 00:31:28.613 17:42:07 -- host/digest.sh@56 -- # rw=randwrite 00:31:28.613 17:42:07 -- host/digest.sh@56 -- # bs=131072 00:31:28.613 17:42:07 -- host/digest.sh@56 -- # qd=16 00:31:28.613 17:42:07 -- host/digest.sh@58 -- # bperfpid=113322 00:31:28.613 17:42:07 -- host/digest.sh@60 -- # waitforlisten 113322 /var/tmp/bperf.sock 00:31:28.613 17:42:07 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:28.613 17:42:07 -- common/autotest_common.sh@819 -- # '[' -z 113322 ']' 00:31:28.613 17:42:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:28.613 17:42:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:28.613 17:42:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:28.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:28.613 17:42:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:28.613 17:42:07 -- common/autotest_common.sh@10 -- # set +x 00:31:28.613 [2024-07-12 17:42:07.552211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:28.613 [2024-07-12 17:42:07.552279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113322 ] 00:31:28.613 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:28.613 Zero copy mechanism will not be used. 00:31:28.881 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.881 [2024-07-12 17:42:07.623975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.881 [2024-07-12 17:42:07.666516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.817 17:42:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:29.817 17:42:08 -- common/autotest_common.sh@852 -- # return 0 00:31:29.817 17:42:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:29.817 17:42:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:29.817 17:42:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:29.817 17:42:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.817 17:42:08 -- common/autotest_common.sh@10 -- # set +x 00:31:29.817 17:42:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.817 17:42:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:29.817 17:42:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:30.075 nvme0n1 00:31:30.075 17:42:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:30.075 17:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.075 17:42:09 -- common/autotest_common.sh@10 -- # set +x 00:31:30.075 17:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.075 17:42:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:30.075 17:42:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:30.334 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:30.334 Zero copy mechanism will not be used. 00:31:30.334 Running I/O for 2 seconds... 00:31:30.334 [2024-07-12 17:42:09.141107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.334 [2024-07-12 17:42:09.141568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.334 [2024-07-12 17:42:09.141603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.334 [2024-07-12 17:42:09.146520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.334 [2024-07-12 17:42:09.146700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.334 [2024-07-12 17:42:09.146730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.334 [2024-07-12 17:42:09.151456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.334 [2024-07-12 17:42:09.151583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.151612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.156359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.156459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.156484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.161203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.161308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.161333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.166073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.166175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.166199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 
17:42:09.170987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.171145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.171169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.175997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.176316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.176342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.180759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.181084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.181110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.185680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.185880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.185904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.190475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.190592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.190624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.195346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.195459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.195484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.200117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.200234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.200264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.204994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.205145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.205168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.209822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.209981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.210005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.214761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.215071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.215097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.219621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.219944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.219969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.224417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.224602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.224625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.229327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.229449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.229473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.234113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.234230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.234253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.238966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.239076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.239099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.243801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.243958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.243981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.248645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.248801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.248825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.253652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.253938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.253964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.259178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.259467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.259492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.265162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.265428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.265454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.271513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.271685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.271708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.278686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.278828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.278852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.284242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.284378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.284402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.289715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.289933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.289958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.335 [2024-07-12 17:42:09.295804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.335 [2024-07-12 17:42:09.295932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.335 [2024-07-12 17:42:09.295956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.595 [2024-07-12 17:42:09.303097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.595 [2024-07-12 17:42:09.303393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.595 [2024-07-12 17:42:09.303419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.595 [2024-07-12 17:42:09.308641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.595 [2024-07-12 17:42:09.308870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.595 [2024-07-12 17:42:09.308894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.595 [2024-07-12 17:42:09.314225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 
00:31:30.595 [2024-07-12 17:42:09.314395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.595 [2024-07-12 17:42:09.314419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.595 [2024-07-12 17:42:09.319557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.595 [2024-07-12 17:42:09.319711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.595 [2024-07-12 17:42:09.319734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.595 [2024-07-12 17:42:09.325071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.595 [2024-07-12 17:42:09.325164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.325188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.331651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.331784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.331811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.339011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.339175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.339199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.345920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.346394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.346420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.354045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.354395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.354421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.360535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.360732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.360756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 
17:42:09.365788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.365891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.365914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.370922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.371115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.371138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.376308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.376434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.376457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.381971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.382125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.382148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.388272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.388420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.388444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.393718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.393925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.393949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.398698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.398977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.399002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.403577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.403763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.403787] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.408411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.408549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.408573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.413929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.414186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.414211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.420355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.420490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.420514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.427207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.427288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 
17:42:09.427312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.434250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.434477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.434502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.441077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.441276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.441301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.447964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.448230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.448262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.455096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.455296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.455322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.460582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.460770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.460795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.465816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.465968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.465993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.471188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.471363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.471389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.476589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.476751] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.476776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.483159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.483456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.483481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.490189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.490456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.490484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.497527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.596 [2024-07-12 17:42:09.497773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.596 [2024-07-12 17:42:09.497798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.596 [2024-07-12 17:42:09.505339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 
17:42:09.505633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.505658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.513301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.513543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.513568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.521020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.521194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.521219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.528562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.528741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.528765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.534904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.535060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.535084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.540788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.541015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.541040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.545874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.546101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.546127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.550919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.551132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.551157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.556040] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.556219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.556242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:30.597 [2024-07-12 17:42:09.561008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.597 [2024-07-12 17:42:09.561185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.597 [2024-07-12 17:42:09.561209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:30.857 [2024-07-12 17:42:09.565956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.857 [2024-07-12 17:42:09.566116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.857 [2024-07-12 17:42:09.566140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:30.857 [2024-07-12 17:42:09.571682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:30.857 [2024-07-12 17:42:09.571853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:30.857 [2024-07-12 17:42:09.571876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:31:30.857 [2024-07-12 17:42:09.577410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90
00:31:30.857 [2024-07-12 17:42:09.577649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.857 [2024-07-12 17:42:09.577673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line cycle (data_crc32_calc_done data digest error on tqpair=(0x1d42510), WRITE sqid:1 cid:15 nsid:1 len:32, TRANSIENT TRANSPORT ERROR (00/22) completion) repeats through [2024-07-12 17:42:09.992181], with only the lba and sqhd values varying ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:09.996881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:09.996983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:09.997007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.001759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.001913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:10.001937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.006741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.006915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:10.006941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.011630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.011806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:10.011832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.019652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.019846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:10.019872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.024983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.025158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:10.025181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.029911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.030093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:10.030117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.034862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.035017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.119 [2024-07-12 17:42:10.035041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.119 [2024-07-12 17:42:10.039795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.119 [2024-07-12 17:42:10.039910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.039933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.120 [2024-07-12 17:42:10.045279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.120 [2024-07-12 17:42:10.045534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.045558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.120 [2024-07-12 17:42:10.051683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.120 [2024-07-12 17:42:10.051882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.051906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.120 [2024-07-12 17:42:10.058106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.120 [2024-07-12 17:42:10.058268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.058292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.120 [2024-07-12 17:42:10.065049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.120 [2024-07-12 17:42:10.065230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.065262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.120 [2024-07-12 17:42:10.071378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.120 [2024-07-12 17:42:10.072223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.072309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.120 [2024-07-12 17:42:10.077883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.120 [2024-07-12 17:42:10.078053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.078077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.120 [2024-07-12 17:42:10.084515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 
00:31:31.120 [2024-07-12 17:42:10.084699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.120 [2024-07-12 17:42:10.084723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.380 [2024-07-12 17:42:10.089931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.380 [2024-07-12 17:42:10.090034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.380 [2024-07-12 17:42:10.090058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.380 [2024-07-12 17:42:10.094990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.380 [2024-07-12 17:42:10.095153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.380 [2024-07-12 17:42:10.095176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.380 [2024-07-12 17:42:10.100044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.380 [2024-07-12 17:42:10.100223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.380 [2024-07-12 17:42:10.100247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.380 [2024-07-12 17:42:10.105136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.380 [2024-07-12 17:42:10.105305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.380 [2024-07-12 17:42:10.105329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.380 [2024-07-12 17:42:10.110245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.380 [2024-07-12 17:42:10.110498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.380 [2024-07-12 17:42:10.110533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.380 [2024-07-12 17:42:10.115306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.380 [2024-07-12 17:42:10.115488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.380 [2024-07-12 17:42:10.115511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.380 [2024-07-12 17:42:10.120313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.120512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.120535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.125811] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.125977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.126000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.132446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.132600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.132624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.139245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.139406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.139429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.146363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.146487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.146510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.151711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.151817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.151841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.156715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.156889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.156915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.161588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.161698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.161724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.167576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.167718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.167742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.173108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.173209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.173233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.178060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.178174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.178198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.183072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.183248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.183278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.188006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.188117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.188139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.193052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.193277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.193303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.197975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.198140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.198163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.202947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.203103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.203126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.207942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.208096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:31.381 [2024-07-12 17:42:10.208120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.212846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.212960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.212984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.217756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.217923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.217946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.222660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.222834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.222857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.227632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.227795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.227818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.232583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.232786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.232811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.381 [2024-07-12 17:42:10.237546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.381 [2024-07-12 17:42:10.237700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.381 [2024-07-12 17:42:10.237723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.242450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.242618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.242642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.247419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.247602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.247632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.252290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.252389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.252413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.257288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.257437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.257461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.262267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.262414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.262437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.267195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 
00:31:31.382 [2024-07-12 17:42:10.267371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.267395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.272178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.272415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.272441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.277093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.277252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.277285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.282064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.282250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.282280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.382 [2024-07-12 17:42:10.287009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.382 [2024-07-12 17:42:10.287168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.382 [2024-07-12 17:42:10.287191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-message pattern — tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90, followed by the nvme_qpair.c WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 completion — repeats for roughly 75 further writes at varying LBAs between 17:42:10.291 and 17:42:10.676; duplicate entries elided ...]
00:31:31.904 [2024-07-12 17:42:10.675940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.904 [2024-07-12 17:42:10.676187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.904 [2024-07-12 17:42:10.676212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.904 [2024-07-12 17:42:10.681343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.904 [2024-07-12 17:42:10.681536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.681559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.687203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.687313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.687336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.693351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.693482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.693505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.700642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.700905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.700930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.707466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.707659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.707682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.712750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.712860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.712883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.717964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.718209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.718234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.723796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.724034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.724058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.730155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.730415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.730440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.735958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.736108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.736135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.741066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.741188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.741211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.746140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.746323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.746346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.751286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.751464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.751487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.756512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.756656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.756680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.762431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.762684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.762710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.769132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.769380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.769405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.775855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.775990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.776013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.783008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.783159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.783181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.790598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.790800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.790832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.798717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.798872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.798895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.806179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.806333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.806357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.812862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.813089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.813113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.819418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.819557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.819580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.825646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 
00:31:31.905 [2024-07-12 17:42:10.825794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.825818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.831465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.831548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.831571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.837956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.838105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.838129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.843649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.843767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.843789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.849725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.849885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.849908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.855839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.905 [2024-07-12 17:42:10.855944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.905 [2024-07-12 17:42:10.855968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:31.905 [2024-07-12 17:42:10.861844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.906 [2024-07-12 17:42:10.861921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.906 [2024-07-12 17:42:10.861945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:31.906 [2024-07-12 17:42:10.868251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:31.906 [2024-07-12 17:42:10.868375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:31.906 [2024-07-12 17:42:10.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 
17:42:10.874055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.874168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.874191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.879211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.879410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.879435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.884282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.884401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.884424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.889352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.889514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.889537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.894394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.894574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.894597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.899448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.899630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.899653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.904533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.904758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.904783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.909634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.909802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.909825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.914774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.914940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.914963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.919840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.920005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.920029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.924973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.925077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.925101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.930024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.930176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.930199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.935057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.935237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.935267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.166 [2024-07-12 17:42:10.940128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.166 [2024-07-12 17:42:10.940289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.166 [2024-07-12 17:42:10.940312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.945202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.945437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.945461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.950264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.950453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.950477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.955321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.955496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.955520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.960306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.960475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.960497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.966072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.966150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.966174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.972506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.972655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.972678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.978079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.978204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.978227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.983938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.984101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.984129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.989776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.989985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.990010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:10.994937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:10.995101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:10.995125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.000033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.000207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.000230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.005071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.005226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.005249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.010160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.010279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.010303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.015194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 
00:31:32.167 [2024-07-12 17:42:11.015322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.015345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.020289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.020451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.020473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.025366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.025526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.025549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.030472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.030703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.030728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.035514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.035691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.035714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.040595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.040741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.040764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.045649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.045799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.045823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.050676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.050775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.050798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 
17:42:11.055727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.055904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.055927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.060777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.060942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.060965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.065823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.065977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.066000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.070886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.071114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.071138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.075929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.076107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.076131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.080975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.081146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.081169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.086035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.167 [2024-07-12 17:42:11.086211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.167 [2024-07-12 17:42:11.086233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.167 [2024-07-12 17:42:11.091050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.091155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.091178] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.096054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.096198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.096222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.101076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.101237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.101268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.106153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.106320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.106343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.111227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.111464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 
17:42:11.111489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.116252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.116413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.116440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.121315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.121467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.121490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.126404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.126600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.126625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:32.168 [2024-07-12 17:42:11.131421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.168 [2024-07-12 17:42:11.131530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.168 [2024-07-12 17:42:11.131553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:32.428 [2024-07-12 17:42:11.136352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d42510) with pdu=0x2000190fef90 00:31:32.428 [2024-07-12 17:42:11.136454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.428 [2024-07-12 17:42:11.136477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:32.428 00:31:32.428 Latency(us) 00:31:32.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.428 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:32.428 nvme0n1 : 2.00 5714.44 714.30 0.00 0.00 2794.29 2219.29 8877.15 00:31:32.428 =================================================================================================================== 00:31:32.428 Total : 5714.44 714.30 0.00 0.00 2794.29 2219.29 8877.15 00:31:32.428 0 00:31:32.428 17:42:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:32.428 17:42:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:32.428 17:42:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:32.428 | .driver_specific 00:31:32.428 | .nvme_error 00:31:32.428 | .status_code 00:31:32.428 | .command_transient_transport_error' 00:31:32.428 17:42:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:32.428 17:42:11 -- host/digest.sh@71 -- # (( 369 > 0 )) 00:31:32.428 17:42:11 -- host/digest.sh@73 -- # killprocess 113322 00:31:32.428 17:42:11 -- common/autotest_common.sh@926 -- # '[' -z 113322 ']' 00:31:32.428 17:42:11 -- 
common/autotest_common.sh@930 -- # kill -0 113322 00:31:32.428 17:42:11 -- common/autotest_common.sh@931 -- # uname 00:31:32.428 17:42:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:32.428 17:42:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113322 00:31:32.428 17:42:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:32.428 17:42:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:32.428 17:42:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113322' 00:31:32.428 killing process with pid 113322 00:31:32.428 17:42:11 -- common/autotest_common.sh@945 -- # kill 113322 00:31:32.428 Received shutdown signal, test time was about 2.000000 seconds 00:31:32.428 00:31:32.428 Latency(us) 00:31:32.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.428 =================================================================================================================== 00:31:32.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:32.428 17:42:11 -- common/autotest_common.sh@950 -- # wait 113322 00:31:32.687 17:42:11 -- host/digest.sh@115 -- # killprocess 110791 00:31:32.687 17:42:11 -- common/autotest_common.sh@926 -- # '[' -z 110791 ']' 00:31:32.687 17:42:11 -- common/autotest_common.sh@930 -- # kill -0 110791 00:31:32.687 17:42:11 -- common/autotest_common.sh@931 -- # uname 00:31:32.687 17:42:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:32.687 17:42:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110791 00:31:32.687 17:42:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:32.687 17:42:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:32.688 17:42:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110791' 00:31:32.688 killing process with pid 110791 00:31:32.688 17:42:11 -- common/autotest_common.sh@945 -- # kill 110791 00:31:32.688 17:42:11 -- 
common/autotest_common.sh@950 -- # wait 110791 00:31:32.947 00:31:32.947 real 0m16.998s 00:31:32.947 user 0m33.376s 00:31:32.947 sys 0m4.333s 00:31:32.947 17:42:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:32.947 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:31:32.947 ************************************ 00:31:32.947 END TEST nvmf_digest_error 00:31:32.947 ************************************ 00:31:32.947 17:42:11 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:31:32.947 17:42:11 -- host/digest.sh@139 -- # nvmftestfini 00:31:32.947 17:42:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:32.947 17:42:11 -- nvmf/common.sh@116 -- # sync 00:31:32.947 17:42:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:32.947 17:42:11 -- nvmf/common.sh@119 -- # set +e 00:31:32.947 17:42:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:32.947 17:42:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:32.947 rmmod nvme_tcp 00:31:32.947 rmmod nvme_fabrics 00:31:32.947 rmmod nvme_keyring 00:31:32.947 17:42:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:32.947 17:42:11 -- nvmf/common.sh@123 -- # set -e 00:31:32.947 17:42:11 -- nvmf/common.sh@124 -- # return 0 00:31:32.947 17:42:11 -- nvmf/common.sh@477 -- # '[' -n 110791 ']' 00:31:32.947 17:42:11 -- nvmf/common.sh@478 -- # killprocess 110791 00:31:32.947 17:42:11 -- common/autotest_common.sh@926 -- # '[' -z 110791 ']' 00:31:32.947 17:42:11 -- common/autotest_common.sh@930 -- # kill -0 110791 00:31:32.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (110791) - No such process 00:31:32.947 17:42:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 110791 is not found' 00:31:32.947 Process with pid 110791 is not found 00:31:32.947 17:42:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:32.947 17:42:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:32.947 17:42:11 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:31:32.947 17:42:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:32.947 17:42:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:32.947 17:42:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.947 17:42:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:32.947 17:42:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.483 17:42:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:35.483 00:31:35.483 real 0m40.228s 00:31:35.483 user 1m5.369s 00:31:35.483 sys 0m12.973s 00:31:35.483 17:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:35.483 17:42:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.483 ************************************ 00:31:35.483 END TEST nvmf_digest 00:31:35.483 ************************************ 00:31:35.483 17:42:13 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:31:35.483 17:42:13 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:31:35.483 17:42:13 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:31:35.483 17:42:13 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:35.483 17:42:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:35.483 17:42:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:35.483 17:42:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.483 ************************************ 00:31:35.483 START TEST nvmf_bdevperf 00:31:35.483 ************************************ 00:31:35.483 17:42:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:35.483 * Looking for test storage... 
00:31:35.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.483 17:42:14 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.483 17:42:14 -- nvmf/common.sh@7 -- # uname -s 00:31:35.483 17:42:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.483 17:42:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.483 17:42:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.483 17:42:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.483 17:42:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.483 17:42:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.483 17:42:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.483 17:42:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.483 17:42:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.483 17:42:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.483 17:42:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:35.483 17:42:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:35.483 17:42:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.483 17:42:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.483 17:42:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.483 17:42:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.483 17:42:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.483 17:42:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.483 17:42:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.483 17:42:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.483 17:42:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.483 17:42:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.483 17:42:14 -- paths/export.sh@5 -- # export PATH 00:31:35.483 17:42:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.483 17:42:14 -- nvmf/common.sh@46 -- # : 0 00:31:35.483 17:42:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:35.483 17:42:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:35.483 17:42:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:35.483 17:42:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.483 17:42:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.483 17:42:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:35.483 17:42:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:35.483 17:42:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:35.483 17:42:14 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:35.483 17:42:14 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:35.483 17:42:14 -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:35.483 17:42:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:35.483 17:42:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.483 17:42:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:35.483 17:42:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:35.483 17:42:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:35.483 17:42:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.483 17:42:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.483 17:42:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.483 17:42:14 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:35.483 17:42:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:35.483 17:42:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:35.483 17:42:14 -- common/autotest_common.sh@10 -- # set +x 00:31:40.767 17:42:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:40.767 17:42:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:40.767 17:42:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:40.767 17:42:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:40.767 17:42:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:40.767 17:42:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:40.767 17:42:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:40.767 17:42:19 -- nvmf/common.sh@294 -- # net_devs=() 00:31:40.767 17:42:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:40.767 17:42:19 -- nvmf/common.sh@295 -- # e810=() 00:31:40.767 17:42:19 -- nvmf/common.sh@295 -- # local -ga e810 00:31:40.767 17:42:19 -- nvmf/common.sh@296 -- # x722=() 00:31:40.767 17:42:19 -- nvmf/common.sh@296 -- # local -ga x722 00:31:40.767 17:42:19 -- nvmf/common.sh@297 -- # mlx=() 00:31:40.767 17:42:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:40.767 17:42:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.767 17:42:19 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.767 17:42:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:40.767 17:42:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:40.767 17:42:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:40.767 17:42:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:40.767 17:42:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:40.767 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:40.767 17:42:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:40.767 17:42:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:40.767 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:40.767 17:42:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:40.767 17:42:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:40.767 17:42:19 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:40.767 17:42:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.767 17:42:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:40.767 17:42:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.767 17:42:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:40.767 Found net devices under 0000:af:00.0: cvl_0_0 00:31:40.767 17:42:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.767 17:42:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:40.767 17:42:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.767 17:42:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:40.767 17:42:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.767 17:42:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:40.767 Found net devices under 0000:af:00.1: cvl_0_1 00:31:40.767 17:42:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.767 17:42:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:40.767 17:42:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:40.767 17:42:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:40.767 17:42:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.767 17:42:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.767 17:42:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.767 17:42:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:40.767 17:42:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.767 17:42:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.767 17:42:19 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:40.767 17:42:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.767 17:42:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.767 17:42:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:40.767 17:42:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:40.767 17:42:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.767 17:42:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.767 17:42:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.767 17:42:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.767 17:42:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:40.767 17:42:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.767 17:42:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.767 17:42:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.767 17:42:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:40.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:31:40.767 00:31:40.767 --- 10.0.0.2 ping statistics --- 00:31:40.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.767 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:31:40.767 17:42:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:31:40.767 00:31:40.767 --- 10.0.0.1 ping statistics --- 00:31:40.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.767 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:40.767 17:42:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.767 17:42:19 -- nvmf/common.sh@410 -- # return 0 00:31:40.767 17:42:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:40.767 17:42:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.767 17:42:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:40.767 17:42:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.767 17:42:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:40.767 17:42:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:40.767 17:42:19 -- host/bdevperf.sh@25 -- # tgt_init 00:31:40.767 17:42:19 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:40.767 17:42:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:40.767 17:42:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:40.767 17:42:19 -- common/autotest_common.sh@10 -- # set +x 00:31:40.767 17:42:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:40.767 17:42:19 -- nvmf/common.sh@469 -- # nvmfpid=118001 00:31:40.767 17:42:19 -- nvmf/common.sh@470 -- # waitforlisten 118001 00:31:40.767 17:42:19 -- common/autotest_common.sh@819 -- # '[' -z 118001 ']' 00:31:40.767 17:42:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.767 17:42:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:40.767 17:42:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:40.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.767 17:42:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:40.767 17:42:19 -- common/autotest_common.sh@10 -- # set +x 00:31:40.767 [2024-07-12 17:42:19.653638] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:40.767 [2024-07-12 17:42:19.653694] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.767 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.767 [2024-07-12 17:42:19.730214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:41.026 [2024-07-12 17:42:19.773886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:41.026 [2024-07-12 17:42:19.774030] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.026 [2024-07-12 17:42:19.774042] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.026 [2024-07-12 17:42:19.774051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
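The `nvmfappstart -m 0xE` above pins nvmf_tgt to cores 1–3, which matches the "Total cores available: 3" notice and the reactor-started lines in the log. A minimal sketch of how such an SPDK-style hex core mask maps to CPU indices; the helper name is ours, not SPDK's:

```shell
# Hypothetical helper (not part of SPDK): list the CPU indices set in a
# hex core mask such as the -m 0xE passed to nvmf_tgt above.
decode_core_mask() {
    local mask=$((16#${1#0x})) i cores=()
    for ((i = 0; i < 64; i++)); do
        (( (mask >> i) & 1 )) && cores+=("$i")
    done
    echo "${cores[@]}"
}

decode_core_mask 0xE   # 0xE = 0b1110 -> 1 2 3
```

This is why the reactors come up on cores 1, 2 and 3 while core 0 stays free for the bdevperf initiator (`-c 0x1`).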
00:31:41.026 [2024-07-12 17:42:19.774163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:41.026 [2024-07-12 17:42:19.774263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.026 [2024-07-12 17:42:19.774277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:41.961 17:42:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:41.961 17:42:20 -- common/autotest_common.sh@852 -- # return 0 00:31:41.961 17:42:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:41.961 17:42:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:41.961 17:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:41.961 17:42:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.961 17:42:20 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:41.961 17:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.961 17:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:41.961 [2024-07-12 17:42:20.644034] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.961 17:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.961 17:42:20 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:41.961 17:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.961 17:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:41.961 Malloc0 00:31:41.961 17:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.961 17:42:20 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:41.961 17:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.961 17:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:41.961 17:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.961 17:42:20 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:41.961 17:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.961 17:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:41.961 17:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.961 17:42:20 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.961 17:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.961 17:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:41.961 [2024-07-12 17:42:20.704447] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.961 17:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.961 17:42:20 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:41.961 17:42:20 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:41.961 17:42:20 -- nvmf/common.sh@520 -- # config=() 00:31:41.961 17:42:20 -- nvmf/common.sh@520 -- # local subsystem config 00:31:41.961 17:42:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:41.961 17:42:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:41.961 { 00:31:41.961 "params": { 00:31:41.961 "name": "Nvme$subsystem", 00:31:41.961 "trtype": "$TEST_TRANSPORT", 00:31:41.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.961 "adrfam": "ipv4", 00:31:41.961 "trsvcid": "$NVMF_PORT", 00:31:41.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.961 "hdgst": ${hdgst:-false}, 00:31:41.961 "ddgst": ${ddgst:-false} 00:31:41.961 }, 00:31:41.961 "method": "bdev_nvme_attach_controller" 00:31:41.961 } 00:31:41.961 EOF 00:31:41.961 )") 00:31:41.961 17:42:20 -- nvmf/common.sh@542 -- # cat 00:31:41.961 17:42:20 -- nvmf/common.sh@544 -- # jq . 
00:31:41.961 17:42:20 -- nvmf/common.sh@545 -- # IFS=, 00:31:41.961 17:42:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:41.961 "params": { 00:31:41.961 "name": "Nvme1", 00:31:41.961 "trtype": "tcp", 00:31:41.961 "traddr": "10.0.0.2", 00:31:41.961 "adrfam": "ipv4", 00:31:41.961 "trsvcid": "4420", 00:31:41.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:41.961 "hdgst": false, 00:31:41.961 "ddgst": false 00:31:41.961 }, 00:31:41.961 "method": "bdev_nvme_attach_controller" 00:31:41.961 }' 00:31:41.961 [2024-07-12 17:42:20.754369] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:41.961 [2024-07-12 17:42:20.754431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118177 ] 00:31:41.961 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.961 [2024-07-12 17:42:20.834624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.961 [2024-07-12 17:42:20.875939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.220 Running I/O for 1 seconds... 
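The `--json /dev/fd/62` config fed to bdevperf comes from `gen_nvmf_target_json` in nvmf/common.sh, which expands a heredoc per subsystem (as traced above) and joins the pieces with `jq`. A self-contained sketch of that expand step, with the environment values hard-coded to what this run used (tcp, 10.0.0.2, port 4420, subsystem 1):

```shell
# Sketch of the gen_nvmf_target_json heredoc expansion seen in the trace;
# variable values are hard-coded to this run's environment, not computed.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
subsystem=1 hdgst= ddgst=
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The expanded JSON matches the `printf '%s\n' '{ ... }'` block the trace prints next, which bdevperf uses to attach a controller to the listener at 10.0.0.2:4420.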
00:31:43.156
00:31:43.156 Latency(us)
00:31:43.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:43.156 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:43.156 Verification LBA range: start 0x0 length 0x4000
00:31:43.156 Nvme1n1 : 1.01 11433.45 44.66 0.00 0.00 11131.95 1414.98 17158.52
00:31:43.156 ===================================================================================================================
00:31:43.156 Total : 11433.45 44.66 0.00 0.00 11131.95 1414.98 17158.52
00:31:43.415 17:42:22 -- host/bdevperf.sh@30 -- # bdevperfpid=118460
00:31:43.415 17:42:22 -- host/bdevperf.sh@32 -- # sleep 3
00:31:43.415 17:42:22 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:31:43.415 17:42:22 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:31:43.415 17:42:22 -- nvmf/common.sh@520 -- # config=()
00:31:43.415 17:42:22 -- nvmf/common.sh@520 -- # local subsystem config
00:31:43.415 17:42:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:31:43.415 17:42:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:31:43.415 {
00:31:43.415 "params": {
00:31:43.415 "name": "Nvme$subsystem",
00:31:43.415 "trtype": "$TEST_TRANSPORT",
00:31:43.415 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:43.415 "adrfam": "ipv4",
00:31:43.415 "trsvcid": "$NVMF_PORT",
00:31:43.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:43.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:43.415 "hdgst": ${hdgst:-false},
00:31:43.415 "ddgst": ${ddgst:-false}
00:31:43.415 },
00:31:43.415 "method": "bdev_nvme_attach_controller"
00:31:43.415 }
00:31:43.415 EOF
00:31:43.415 )")
00:31:43.415 17:42:22 -- nvmf/common.sh@542 -- # cat
00:31:43.415 17:42:22 -- nvmf/common.sh@544 -- # jq .
00:31:43.415 17:42:22 -- nvmf/common.sh@545 -- # IFS=, 00:31:43.415 17:42:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:43.415 "params": { 00:31:43.415 "name": "Nvme1", 00:31:43.415 "trtype": "tcp", 00:31:43.415 "traddr": "10.0.0.2", 00:31:43.415 "adrfam": "ipv4", 00:31:43.415 "trsvcid": "4420", 00:31:43.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:43.415 "hdgst": false, 00:31:43.415 "ddgst": false 00:31:43.415 }, 00:31:43.416 "method": "bdev_nvme_attach_controller" 00:31:43.416 }' 00:31:43.416 [2024-07-12 17:42:22.308673] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:43.416 [2024-07-12 17:42:22.308730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118460 ] 00:31:43.416 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.675 [2024-07-12 17:42:22.389797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.675 [2024-07-12 17:42:22.430836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.675 Running I/O for 15 seconds... 
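The target this 15-second job connects to lives in the `cvl_0_0_ns_spdk` network namespace that nvmf/common.sh set up earlier in the log (netns add, link move, 10.0.0.1/10.0.0.2 addressing, iptables accept on 4420). A hedged recap of that plumbing as one function; `RUN=echo` prints the commands rather than executing them, since applying them needs root plus this testbed's `cvl_0_0`/`cvl_0_1` ports:

```shell
# Recap of the namespace setup from nvmf/common.sh seen earlier in this log.
# RUN=echo prints each command; set RUN= (as root, with the cvl_* ports
# present) to actually apply them.
RUN=${RUN-echo}
setup_target_netns() {
    local netns=cvl_0_0_ns_spdk
    $RUN ip netns add "$netns"
    $RUN ip link set cvl_0_0 netns "$netns"            # target-side port
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
    $RUN ip netns exec "$netns" ip addr add 10.0.0.2/24 dev cvl_0_0
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$netns" ip link set cvl_0_0 up
    $RUN ip netns exec "$netns" ip link set lo up
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
setup_target_netns
```

With nvmf_tgt run via `ip netns exec cvl_0_0_ns_spdk`, the initiator on the host side reaches it over real TCP at 10.0.0.2:4420, which is why the pings in both directions were checked before the test started.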
00:31:46.966 17:42:25 -- host/bdevperf.sh@33 -- # kill -9 118001 00:31:46.966 17:42:25 -- host/bdevperf.sh@35 -- # sleep 3 00:31:46.966 [2024-07-12 17:42:25.277799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.966 [2024-07-12 17:42:25.277843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.966 [2024-07-12 17:42:25.277868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.966 [2024-07-12 17:42:25.277882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.966 [2024-07-12 17:42:25.277897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.966 [2024-07-12 17:42:25.277909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.966 [2024-07-12 17:42:25.277925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.966 [2024-07-12 17:42:25.277936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.966 [2024-07-12 17:42:25.277953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.966 [2024-07-12 17:42:25.277964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.277977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:72 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.277992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:46.967 [2024-07-12 17:42:25.278151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278296] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:66 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:46.967 [2024-07-12 17:42:25.278582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.967 [2024-07-12 17:42:25.278613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.967 [2024-07-12 17:42:25.278658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.967 [2024-07-12 17:42:25.278747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.967 [2024-07-12 17:42:25.278791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:109 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.967 [2024-07-12 17:42:25.278835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.278978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.967 [2024-07-12 17:42:25.278990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.967 [2024-07-12 17:42:25.279002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:112 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:46.968 [2024-07-12 17:42:25.279342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:65 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:46.968 [2024-07-12 17:42:25.279720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.968 [2024-07-12 17:42:25.279928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.968 [2024-07-12 17:42:25.279940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.968 [2024-07-12 17:42:25.279950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.279962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.279972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.279984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.279994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 
[2024-07-12 17:42:25.280093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 
[2024-07-12 17:42:25.280481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.969 [2024-07-12 17:42:25.280623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.969 [2024-07-12 17:42:25.280787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2910 is same with the state(5) to be set 00:31:46.969 [2024-07-12 17:42:25.280810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:46.969 [2024-07-12 17:42:25.280818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:46.969 [2024-07-12 17:42:25.280827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114624 len:8 PRP1 0x0 PRP2 0x0 00:31:46.969 [2024-07-12 17:42:25.280837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280886] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13a2910 was disconnected and freed. reset controller. 
00:31:46.969 [2024-07-12 17:42:25.280937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.969 [2024-07-12 17:42:25.280951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.969 [2024-07-12 17:42:25.280971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.969 [2024-07-12 17:42:25.280981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.969 [2024-07-12 17:42:25.280990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.970 [2024-07-12 17:42:25.281001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.970 [2024-07-12 17:42:25.281011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.970 [2024-07-12 17:42:25.281020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.970 [2024-07-12 17:42:25.283917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.970 [2024-07-12 17:42:25.283954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.970 [2024-07-12 17:42:25.284564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 
17:42:25.284740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 17:42:25.284755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.970 [2024-07-12 17:42:25.284768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.970 [2024-07-12 17:42:25.284945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.970 [2024-07-12 17:42:25.285075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.970 [2024-07-12 17:42:25.285087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.970 [2024-07-12 17:42:25.285097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.970 [2024-07-12 17:42:25.287902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.970 [2024-07-12 17:42:25.297299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.970 [2024-07-12 17:42:25.297763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 17:42:25.298017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 17:42:25.298033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.970 [2024-07-12 17:42:25.298044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.970 [2024-07-12 17:42:25.298539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.970 [2024-07-12 17:42:25.298785] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.970 [2024-07-12 17:42:25.298798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.970 [2024-07-12 17:42:25.298807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.970 [2024-07-12 17:42:25.301315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.970 [2024-07-12 17:42:25.310034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.970 [2024-07-12 17:42:25.310491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 17:42:25.310726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 17:42:25.310758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.970 [2024-07-12 17:42:25.310781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.970 [2024-07-12 17:42:25.311320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.970 [2024-07-12 17:42:25.311628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.970 [2024-07-12 17:42:25.311646] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.970 [2024-07-12 17:42:25.311660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.970 [2024-07-12 17:42:25.315723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.970 [2024-07-12 17:42:25.323720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.970 [2024-07-12 17:42:25.324159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 17:42:25.324397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.970 [2024-07-12 17:42:25.324431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.970 [2024-07-12 17:42:25.324453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.970 [2024-07-12 17:42:25.324735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.970 [2024-07-12 17:42:25.324911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.970 [2024-07-12 17:42:25.324924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.970 [2024-07-12 17:42:25.324933] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.970 [2024-07-12 17:42:25.327825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.970 [2024-07-12 17:42:25.336813] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.970 [2024-07-12 17:42:25.337328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.337552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.337584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.970 [2024-07-12 17:42:25.337607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.970 [2024-07-12 17:42:25.337830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.970 [2024-07-12 17:42:25.338006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.970 [2024-07-12 17:42:25.338019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.970 [2024-07-12 17:42:25.338029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.970 [2024-07-12 17:42:25.340901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.970 [2024-07-12 17:42:25.349873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.970 [2024-07-12 17:42:25.350358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.350624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.350656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.970 [2024-07-12 17:42:25.350678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.970 [2024-07-12 17:42:25.351125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.970 [2024-07-12 17:42:25.351307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.970 [2024-07-12 17:42:25.351321] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.970 [2024-07-12 17:42:25.351331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.970 [2024-07-12 17:42:25.354214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.970 [2024-07-12 17:42:25.362675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.970 [2024-07-12 17:42:25.363195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.363508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.363542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.970 [2024-07-12 17:42:25.363564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.970 [2024-07-12 17:42:25.363834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.970 [2024-07-12 17:42:25.364078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.970 [2024-07-12 17:42:25.364091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.970 [2024-07-12 17:42:25.364100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.970 [2024-07-12 17:42:25.366761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.970 [2024-07-12 17:42:25.375617] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.970 [2024-07-12 17:42:25.376008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.376181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.376197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.970 [2024-07-12 17:42:25.376208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.970 [2024-07-12 17:42:25.376347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.970 [2024-07-12 17:42:25.376522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.970 [2024-07-12 17:42:25.376535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.970 [2024-07-12 17:42:25.376545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.970 [2024-07-12 17:42:25.379092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.970 [2024-07-12 17:42:25.388454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.970 [2024-07-12 17:42:25.388815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.389012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.970 [2024-07-12 17:42:25.389028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.970 [2024-07-12 17:42:25.389039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.970 [2024-07-12 17:42:25.389214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.970 [2024-07-12 17:42:25.389376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.970 [2024-07-12 17:42:25.389389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.970 [2024-07-12 17:42:25.389399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.970 [2024-07-12 17:42:25.392196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.970 [2024-07-12 17:42:25.401401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.401667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.401807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.401823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.401838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.401991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.402143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.402156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.402165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.404792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.414300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.414775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.415047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.415085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.415109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.415367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.415744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.415758] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.415768] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.418689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.427179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.427481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.428758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.428786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.428798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.428888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.429133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.429145] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.429154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.431983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.440111] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.440544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.440763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.440795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.440818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.441207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.441508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.441535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.441556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.444815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.452992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.453429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.453648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.453681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.453703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.454034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.454221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.454233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.454244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.456999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.466186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.466653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.466872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.466905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.466928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.467225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.467407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.467421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.467431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.470042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.478978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.479455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.479715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.479746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.479769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.480149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.480412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.480426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.480436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.483253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.491774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.492087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.492295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.492312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.492323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.492520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.971 [2024-07-12 17:42:25.492673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.971 [2024-07-12 17:42:25.492685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.971 [2024-07-12 17:42:25.492695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.971 [2024-07-12 17:42:25.495311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.971 [2024-07-12 17:42:25.504696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.971 [2024-07-12 17:42:25.505164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.505421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.971 [2024-07-12 17:42:25.505438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.971 [2024-07-12 17:42:25.505449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.971 [2024-07-12 17:42:25.505624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.505777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.505790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.505800] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.508530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.517651] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.518183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.518340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.518358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.518369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.518567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.518766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.518782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.518792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.521638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.530341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.530801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.531096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.531127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.531150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.531473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.531649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.531661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.531671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.534289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.543370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.543778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.543955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.543971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.543982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.544135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.544339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.544353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.544363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.547429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.556581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.556982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.557155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.557187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.557209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.557555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.558039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.558064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.558093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.561019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.569629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.570114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.570332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.570367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.570390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.570632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.570785] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.570798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.570807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.573494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.582824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.583146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.583397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.583414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.583424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.583668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.583843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.583855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.583866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.586686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.596126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.596534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.596821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.596853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.596875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.597320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.597505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.597517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.597527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.600389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.608916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.609381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.609670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.609702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.609724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.610108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.610353] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.610366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.610376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.612919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.622207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.622665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.622930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.622962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.622984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.623307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.972 [2024-07-12 17:42:25.623483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.972 [2024-07-12 17:42:25.623496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.972 [2024-07-12 17:42:25.623506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.972 [2024-07-12 17:42:25.626300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.972 [2024-07-12 17:42:25.635402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.972 [2024-07-12 17:42:25.635743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.636004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.972 [2024-07-12 17:42:25.636036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.972 [2024-07-12 17:42:25.636058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.972 [2024-07-12 17:42:25.636327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.973 [2024-07-12 17:42:25.636526] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.973 [2024-07-12 17:42:25.636539] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.973 [2024-07-12 17:42:25.636548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.973 [2024-07-12 17:42:25.639112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.973 [2024-07-12 17:42:25.648426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.973 [2024-07-12 17:42:25.648923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.973 [2024-07-12 17:42:25.649117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.973 [2024-07-12 17:42:25.649149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.973 [2024-07-12 17:42:25.649171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.973 [2024-07-12 17:42:25.649533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.973 [2024-07-12 17:42:25.649710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.973 [2024-07-12 17:42:25.649723] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.973 [2024-07-12 17:42:25.649732] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.973 [2024-07-12 17:42:25.652484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.973 [2024-07-12 17:42:25.661540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.973 [2024-07-12 17:42:25.661949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.973 [2024-07-12 17:42:25.662205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.973 [2024-07-12 17:42:25.662240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.973 [2024-07-12 17:42:25.662278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.973 [2024-07-12 17:42:25.662609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.973 [2024-07-12 17:42:25.662848] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.973 [2024-07-12 17:42:25.662861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.973 [2024-07-12 17:42:25.662871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.973 [2024-07-12 17:42:25.665599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.973 [2024-07-12 17:42:25.674503] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.973 [2024-07-12 17:42:25.674978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.973 [2024-07-12 17:42:25.675155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.973 [2024-07-12 17:42:25.675171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:46.973 [2024-07-12 17:42:25.675181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:46.973 [2024-07-12 17:42:25.675362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:46.973 [2024-07-12 17:42:25.675562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:46.973 [2024-07-12 17:42:25.675574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:46.973 [2024-07-12 17:42:25.675583] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.973 [2024-07-12 17:42:25.677948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:46.973 [2024-07-12 17:42:25.687535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.973 [2024-07-12 17:42:25.687941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.688171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.688202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.973 [2024-07-12 17:42:25.688225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.973 [2024-07-12 17:42:25.688672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.973 [2024-07-12 17:42:25.688947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.973 [2024-07-12 17:42:25.688960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.973 [2024-07-12 17:42:25.688969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.973 [2024-07-12 17:42:25.691718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.973 [2024-07-12 17:42:25.700484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.973 [2024-07-12 17:42:25.700898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.701160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.701192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.973 [2024-07-12 17:42:25.701214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.973 [2024-07-12 17:42:25.701472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.973 [2024-07-12 17:42:25.701672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.973 [2024-07-12 17:42:25.701684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.973 [2024-07-12 17:42:25.701694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.973 [2024-07-12 17:42:25.704466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.973 [2024-07-12 17:42:25.713266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.973 [2024-07-12 17:42:25.713615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.713903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.713935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.973 [2024-07-12 17:42:25.713957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.973 [2024-07-12 17:42:25.714403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.973 [2024-07-12 17:42:25.714688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.973 [2024-07-12 17:42:25.714713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.973 [2024-07-12 17:42:25.714744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.973 [2024-07-12 17:42:25.717529] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.973 [2024-07-12 17:42:25.726142] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.973 [2024-07-12 17:42:25.726529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.726728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.726760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.973 [2024-07-12 17:42:25.726782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.973 [2024-07-12 17:42:25.727066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.973 [2024-07-12 17:42:25.727510] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.973 [2024-07-12 17:42:25.727536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.973 [2024-07-12 17:42:25.727557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.973 [2024-07-12 17:42:25.730773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.973 [2024-07-12 17:42:25.739116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.973 [2024-07-12 17:42:25.739528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.739705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.973 [2024-07-12 17:42:25.739720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.973 [2024-07-12 17:42:25.739731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.973 [2024-07-12 17:42:25.739974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.973 [2024-07-12 17:42:25.740126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.973 [2024-07-12 17:42:25.740138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.973 [2024-07-12 17:42:25.740148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.973 [2024-07-12 17:42:25.742834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.973 [2024-07-12 17:42:25.752174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.973 [2024-07-12 17:42:25.752609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.752802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.752836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.752859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.753239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.753588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.753613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.753635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.756701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.765212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.765707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.765995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.766026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.766055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.766449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.766783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.766808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.766828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.769675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.778439] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.778958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.779246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.779293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.779315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.779745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.780024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.780037] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.780046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.782931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.791450] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.791902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.792086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.792100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.792110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.792238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.792442] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.792456] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.792466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.795052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.804496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.804990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.805189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.805220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.805243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.805644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.805927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.805951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.805972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.808772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.817526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.817965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.818201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.818232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.818268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.818600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.818957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.818970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.818980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.821796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.830556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.831039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.831252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.831298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.831320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.831700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.832163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.832176] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.832186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.835087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.843705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.844228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.844473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.844506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.844529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.844818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.845078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.845091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.845101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.847742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.856550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.857023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.857321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.857356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.857379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.857709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.974 [2024-07-12 17:42:25.858062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.974 [2024-07-12 17:42:25.858075] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.974 [2024-07-12 17:42:25.858085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.974 [2024-07-12 17:42:25.860813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.974 [2024-07-12 17:42:25.869235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.974 [2024-07-12 17:42:25.869694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.869947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.974 [2024-07-12 17:42:25.869979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.974 [2024-07-12 17:42:25.870002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.974 [2024-07-12 17:42:25.870498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.975 [2024-07-12 17:42:25.870933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.975 [2024-07-12 17:42:25.870957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.975 [2024-07-12 17:42:25.870978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.975 [2024-07-12 17:42:25.873646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.975 [2024-07-12 17:42:25.882073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.975 [2024-07-12 17:42:25.882530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.882786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.882818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.975 [2024-07-12 17:42:25.882840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.975 [2024-07-12 17:42:25.883074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.975 [2024-07-12 17:42:25.883253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.975 [2024-07-12 17:42:25.883273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.975 [2024-07-12 17:42:25.883283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.975 [2024-07-12 17:42:25.885982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.975 [2024-07-12 17:42:25.894997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.975 [2024-07-12 17:42:25.895446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.895699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.895714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.975 [2024-07-12 17:42:25.895726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.975 [2024-07-12 17:42:25.895924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.975 [2024-07-12 17:42:25.896100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.975 [2024-07-12 17:42:25.896113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.975 [2024-07-12 17:42:25.896123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.975 [2024-07-12 17:42:25.898877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.975 [2024-07-12 17:42:25.908093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.975 [2024-07-12 17:42:25.908522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.908696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.908712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.975 [2024-07-12 17:42:25.908722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.975 [2024-07-12 17:42:25.908897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.975 [2024-07-12 17:42:25.909118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.975 [2024-07-12 17:42:25.909131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.975 [2024-07-12 17:42:25.909141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.975 [2024-07-12 17:42:25.911959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:46.975 [2024-07-12 17:42:25.920846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.975 [2024-07-12 17:42:25.921311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.921484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.975 [2024-07-12 17:42:25.921500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:46.975 [2024-07-12 17:42:25.921511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:46.975 [2024-07-12 17:42:25.921640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:46.975 [2024-07-12 17:42:25.921814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:46.975 [2024-07-12 17:42:25.921827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:46.975 [2024-07-12 17:42:25.921841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.975 [2024-07-12 17:42:25.924594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.236 [2024-07-12 17:42:25.933890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.236 [2024-07-12 17:42:25.934325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.236 [2024-07-12 17:42:25.934550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.236 [2024-07-12 17:42:25.934565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.236 [2024-07-12 17:42:25.934576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.236 [2024-07-12 17:42:25.934729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.236 [2024-07-12 17:42:25.934904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.236 [2024-07-12 17:42:25.934916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.236 [2024-07-12 17:42:25.934926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.236 [2024-07-12 17:42:25.937541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.236 [2024-07-12 17:42:25.946916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.236 [2024-07-12 17:42:25.947318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.236 [2024-07-12 17:42:25.947546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.236 [2024-07-12 17:42:25.947561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.236 [2024-07-12 17:42:25.947572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.236 [2024-07-12 17:42:25.947815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.236 [2024-07-12 17:42:25.947969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.236 [2024-07-12 17:42:25.947981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.236 [2024-07-12 17:42:25.947991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.236 [2024-07-12 17:42:25.950589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.236 [2024-07-12 17:42:25.959897] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.236 [2024-07-12 17:42:25.960262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:25.960435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:25.960451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.236 [2024-07-12 17:42:25.960462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.236 [2024-07-12 17:42:25.960637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.236 [2024-07-12 17:42:25.960790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.236 [2024-07-12 17:42:25.960803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.236 [2024-07-12 17:42:25.960816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.236 [2024-07-12 17:42:25.963296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.236 [2024-07-12 17:42:25.973036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.236 [2024-07-12 17:42:25.973417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:25.973672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:25.973704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.236 [2024-07-12 17:42:25.973727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.236 [2024-07-12 17:42:25.974007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.236 [2024-07-12 17:42:25.974227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.236 [2024-07-12 17:42:25.974240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.236 [2024-07-12 17:42:25.974250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.236 [2024-07-12 17:42:25.977119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.236 [2024-07-12 17:42:25.986176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.236 [2024-07-12 17:42:25.986627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:25.986888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:25.986920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.236 [2024-07-12 17:42:25.986942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.236 [2024-07-12 17:42:25.987285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.236 [2024-07-12 17:42:25.987524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.236 [2024-07-12 17:42:25.987537] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.236 [2024-07-12 17:42:25.987546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.236 [2024-07-12 17:42:25.990276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.236 [2024-07-12 17:42:25.999288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.236 [2024-07-12 17:42:25.999697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:26.000013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:26.000045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.236 [2024-07-12 17:42:26.000067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.236 [2024-07-12 17:42:26.000322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.236 [2024-07-12 17:42:26.000499] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.236 [2024-07-12 17:42:26.000512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.236 [2024-07-12 17:42:26.000521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.236 [2024-07-12 17:42:26.003132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.236 [2024-07-12 17:42:26.012150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.236 [2024-07-12 17:42:26.012587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:26.012813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:26.012829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.236 [2024-07-12 17:42:26.012840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.236 [2024-07-12 17:42:26.013015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.236 [2024-07-12 17:42:26.013213] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.236 [2024-07-12 17:42:26.013225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.236 [2024-07-12 17:42:26.013236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.236 [2024-07-12 17:42:26.015908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.236 [2024-07-12 17:42:26.024926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.236 [2024-07-12 17:42:26.025404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:26.025658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.236 [2024-07-12 17:42:26.025674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.025684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.025859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.026012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.026024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.026034] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.028881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.038039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.038387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.038618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.038650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.038672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.039002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.039417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.039436] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.039450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.043390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.051522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.051930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.052194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.052226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.052248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.052596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.052815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.052828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.052838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.055339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.064454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.064920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.065133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.065166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.065189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.065506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.065705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.065718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.065728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.068433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.077648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.078119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.078409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.078444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.078467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.078801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.079022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.079035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.079045] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.081889] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.090635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.091044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.091334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.091369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.091392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.091697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.091895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.091907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.091917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.094535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.103601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.104011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.104323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.104357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.104380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.104684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.104815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.104827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.104837] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.107408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.116660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.117045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.117331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.117370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.117381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.117556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.117755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.117768] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.117777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.120391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.129730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.130223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.130497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.130529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.130558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.237 [2024-07-12 17:42:26.130930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.237 [2024-07-12 17:42:26.131128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.237 [2024-07-12 17:42:26.131141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.237 [2024-07-12 17:42:26.131150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.237 [2024-07-12 17:42:26.133770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.237 [2024-07-12 17:42:26.142919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.237 [2024-07-12 17:42:26.143249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.143523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.237 [2024-07-12 17:42:26.143554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.237 [2024-07-12 17:42:26.143576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.238 [2024-07-12 17:42:26.143960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.238 [2024-07-12 17:42:26.144405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.238 [2024-07-12 17:42:26.144432] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.238 [2024-07-12 17:42:26.144453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.238 [2024-07-12 17:42:26.147369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.238 [2024-07-12 17:42:26.156083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.238 [2024-07-12 17:42:26.156497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.156750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.156782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.238 [2024-07-12 17:42:26.156805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.238 [2024-07-12 17:42:26.157187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.238 [2024-07-12 17:42:26.157605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.238 [2024-07-12 17:42:26.157619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.238 [2024-07-12 17:42:26.157629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.238 [2024-07-12 17:42:26.160514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.238 [2024-07-12 17:42:26.169090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.238 [2024-07-12 17:42:26.169526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.169786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.169817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.238 [2024-07-12 17:42:26.169846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.238 [2024-07-12 17:42:26.170228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.238 [2024-07-12 17:42:26.170675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.238 [2024-07-12 17:42:26.170701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.238 [2024-07-12 17:42:26.170722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.238 [2024-07-12 17:42:26.174593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.238 [2024-07-12 17:42:26.182222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.238 [2024-07-12 17:42:26.182690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.182907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.182938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.238 [2024-07-12 17:42:26.182959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.238 [2024-07-12 17:42:26.183499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.238 [2024-07-12 17:42:26.183766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.238 [2024-07-12 17:42:26.183779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.238 [2024-07-12 17:42:26.183788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.238 [2024-07-12 17:42:26.186425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.238 [2024-07-12 17:42:26.195527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.238 [2024-07-12 17:42:26.196031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.196233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.238 [2024-07-12 17:42:26.196278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.238 [2024-07-12 17:42:26.196302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.238 [2024-07-12 17:42:26.196581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.238 [2024-07-12 17:42:26.196945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.238 [2024-07-12 17:42:26.196958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.238 [2024-07-12 17:42:26.196967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.238 [2024-07-12 17:42:26.199625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.498 [2024-07-12 17:42:26.208574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.498 [2024-07-12 17:42:26.208978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.498 [2024-07-12 17:42:26.209134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.498 [2024-07-12 17:42:26.209165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.498 [2024-07-12 17:42:26.209187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.498 [2024-07-12 17:42:26.209540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.498 [2024-07-12 17:42:26.209885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.498 [2024-07-12 17:42:26.209897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.498 [2024-07-12 17:42:26.209907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.498 [2024-07-12 17:42:26.212587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.498 [2024-07-12 17:42:26.221654] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.498 [2024-07-12 17:42:26.222061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.499 [2024-07-12 17:42:26.222228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.499 [2024-07-12 17:42:26.222244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.499 [2024-07-12 17:42:26.222261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.499 [2024-07-12 17:42:26.222437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.499 [2024-07-12 17:42:26.222681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.499 [2024-07-12 17:42:26.222693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.499 [2024-07-12 17:42:26.222702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.499 [2024-07-12 17:42:26.225476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.499 [2024-07-12 17:42:26.234904] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.499 [2024-07-12 17:42:26.235312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.499 [2024-07-12 17:42:26.235483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.499 [2024-07-12 17:42:26.235499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.499 [2024-07-12 17:42:26.235509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.499 [2024-07-12 17:42:26.235638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.499 [2024-07-12 17:42:26.235881] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.499 [2024-07-12 17:42:26.235893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.499 [2024-07-12 17:42:26.235903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.499 [2024-07-12 17:42:26.238614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.499 [2024-07-12 17:42:26.247950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.499 [2024-07-12 17:42:26.248380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.499 [2024-07-12 17:42:26.248653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.499 [2024-07-12 17:42:26.248685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.499 [2024-07-12 17:42:26.248706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.499 [2024-07-12 17:42:26.249039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.499 [2024-07-12 17:42:26.249242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.499 [2024-07-12 17:42:26.249262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.499 [2024-07-12 17:42:26.249273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.499 [2024-07-12 17:42:26.252107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.499 [2024-07-12 17:42:26.261141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.261582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.261845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.261876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.499 [2024-07-12 17:42:26.261899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.499 [2024-07-12 17:42:26.262185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.499 [2024-07-12 17:42:26.262299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.499 [2024-07-12 17:42:26.262312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.499 [2024-07-12 17:42:26.262322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.499 [2024-07-12 17:42:26.264912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.499 [2024-07-12 17:42:26.274152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.274584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.274810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.274826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.499 [2024-07-12 17:42:26.274836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.499 [2024-07-12 17:42:26.275034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.499 [2024-07-12 17:42:26.275187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.499 [2024-07-12 17:42:26.275200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.499 [2024-07-12 17:42:26.275210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.499 [2024-07-12 17:42:26.277938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.499 [2024-07-12 17:42:26.286997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.287405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.287678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.287708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.499 [2024-07-12 17:42:26.287730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.499 [2024-07-12 17:42:26.288061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.499 [2024-07-12 17:42:26.288294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.499 [2024-07-12 17:42:26.288311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.499 [2024-07-12 17:42:26.288321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.499 [2024-07-12 17:42:26.290930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.499 [2024-07-12 17:42:26.300058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.300537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.300733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.300749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.499 [2024-07-12 17:42:26.300759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.499 [2024-07-12 17:42:26.300889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.499 [2024-07-12 17:42:26.300994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.499 [2024-07-12 17:42:26.301007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.499 [2024-07-12 17:42:26.301017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.499 [2024-07-12 17:42:26.303680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.499 [2024-07-12 17:42:26.313124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.313531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.313687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.313703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.499 [2024-07-12 17:42:26.313713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.499 [2024-07-12 17:42:26.313890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.499 [2024-07-12 17:42:26.314043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.499 [2024-07-12 17:42:26.314055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.499 [2024-07-12 17:42:26.314065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.499 [2024-07-12 17:42:26.316851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.499 [2024-07-12 17:42:26.326131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.326692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.326888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.326920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.499 [2024-07-12 17:42:26.326943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.499 [2024-07-12 17:42:26.327236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.499 [2024-07-12 17:42:26.327420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.499 [2024-07-12 17:42:26.327433] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.499 [2024-07-12 17:42:26.327448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.499 [2024-07-12 17:42:26.330148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.499 [2024-07-12 17:42:26.339100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.339533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.339762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.499 [2024-07-12 17:42:26.339778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.499 [2024-07-12 17:42:26.339788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.499 [2024-07-12 17:42:26.339896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.499 [2024-07-12 17:42:26.340048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.499 [2024-07-12 17:42:26.340060] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.499 [2024-07-12 17:42:26.340070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.499 [2024-07-12 17:42:26.342780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.499 [2024-07-12 17:42:26.352113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.499 [2024-07-12 17:42:26.352525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.352781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.352813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.352834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.353165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.353449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.353463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.353473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.356244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.365143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.365489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.365737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.365767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.365791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.366221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.366540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.366554] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.366564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.369173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.377991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.378442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.378737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.378769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.378792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.379172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.379484] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.379498] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.379507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.382166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.391092] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.391583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.391848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.391879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.391901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.392297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.392551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.392564] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.392573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.395369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.403901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.404343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.404630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.404661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.404683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.404967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.405311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.405324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.405334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.408172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.416902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.417392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.417677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.417709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.417731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.418114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.418353] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.418366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.418377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.421078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.429615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.429964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.430159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.430190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.430212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.430583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.430783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.430795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.430805] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.433331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.442699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.443131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.443414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.443447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.443470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.443900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.444188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.444206] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.444219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.448027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.500 [2024-07-12 17:42:26.456177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.500 [2024-07-12 17:42:26.456629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.456921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.500 [2024-07-12 17:42:26.456952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.500 [2024-07-12 17:42:26.456974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.500 [2024-07-12 17:42:26.457369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.500 [2024-07-12 17:42:26.457665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.500 [2024-07-12 17:42:26.457678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.500 [2024-07-12 17:42:26.457688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.500 [2024-07-12 17:42:26.460415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.760 [2024-07-12 17:42:26.469502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.760 [2024-07-12 17:42:26.469961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.760 [2024-07-12 17:42:26.470221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.760 [2024-07-12 17:42:26.470252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.760 [2024-07-12 17:42:26.470292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.760 [2024-07-12 17:42:26.470500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.760 [2024-07-12 17:42:26.470676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.760 [2024-07-12 17:42:26.470688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.760 [2024-07-12 17:42:26.470698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.760 [2024-07-12 17:42:26.473608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.760 [2024-07-12 17:42:26.482500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.760 [2024-07-12 17:42:26.482926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.760 [2024-07-12 17:42:26.483176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.760 [2024-07-12 17:42:26.483208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.760 [2024-07-12 17:42:26.483231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.760 [2024-07-12 17:42:26.483676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.760 [2024-07-12 17:42:26.483910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.760 [2024-07-12 17:42:26.483923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.760 [2024-07-12 17:42:26.483932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.760 [2024-07-12 17:42:26.486905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.760 [2024-07-12 17:42:26.495497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.760 [2024-07-12 17:42:26.495903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.760 [2024-07-12 17:42:26.496107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.760 [2024-07-12 17:42:26.496144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.760 [2024-07-12 17:42:26.496167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.760 [2024-07-12 17:42:26.496566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.760 [2024-07-12 17:42:26.496749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.761 [2024-07-12 17:42:26.496761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.761 [2024-07-12 17:42:26.496771] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.761 [2024-07-12 17:42:26.499525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.761 [2024-07-12 17:42:26.508353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.761 [2024-07-12 17:42:26.508734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.508845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.508861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.761 [2024-07-12 17:42:26.508872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.761 [2024-07-12 17:42:26.509092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.761 [2024-07-12 17:42:26.509319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.761 [2024-07-12 17:42:26.509333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.761 [2024-07-12 17:42:26.509343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.761 [2024-07-12 17:42:26.512047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.761 [2024-07-12 17:42:26.521023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.761 [2024-07-12 17:42:26.521508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.521625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.521640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.761 [2024-07-12 17:42:26.521651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.761 [2024-07-12 17:42:26.521848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.761 [2024-07-12 17:42:26.522001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.761 [2024-07-12 17:42:26.522014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.761 [2024-07-12 17:42:26.522024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.761 [2024-07-12 17:42:26.524713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.761 [2024-07-12 17:42:26.534198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.761 [2024-07-12 17:42:26.534631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.534858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.534874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.761 [2024-07-12 17:42:26.534889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.761 [2024-07-12 17:42:26.535110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.761 [2024-07-12 17:42:26.535291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.761 [2024-07-12 17:42:26.535305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.761 [2024-07-12 17:42:26.535316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.761 [2024-07-12 17:42:26.537971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.761 [2024-07-12 17:42:26.547222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.761 [2024-07-12 17:42:26.547666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.547928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.761 [2024-07-12 17:42:26.547958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:47.761 [2024-07-12 17:42:26.547981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:47.761 [2024-07-12 17:42:26.548472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:47.761 [2024-07-12 17:42:26.548921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.761 [2024-07-12 17:42:26.548934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:47.761 [2024-07-12 17:42:26.548943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.761 [2024-07-12 17:42:26.551774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:47.761 [2024-07-12 17:42:26.560308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.761 [2024-07-12 17:42:26.560782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.560988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.561021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.761 [2024-07-12 17:42:26.561043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.761 [2024-07-12 17:42:26.561336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.761 [2024-07-12 17:42:26.561770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.761 [2024-07-12 17:42:26.561795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.761 [2024-07-12 17:42:26.561816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.761 [2024-07-12 17:42:26.564245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.761 [2024-07-12 17:42:26.573250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.761 [2024-07-12 17:42:26.573647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.573871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.573902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.761 [2024-07-12 17:42:26.573924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.761 [2024-07-12 17:42:26.574212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.761 [2024-07-12 17:42:26.574611] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.761 [2024-07-12 17:42:26.574638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.761 [2024-07-12 17:42:26.574660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.761 [2024-07-12 17:42:26.577545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.761 [2024-07-12 17:42:26.586251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.761 [2024-07-12 17:42:26.586652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.586861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.586892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.761 [2024-07-12 17:42:26.586914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.761 [2024-07-12 17:42:26.587147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.761 [2024-07-12 17:42:26.587545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.761 [2024-07-12 17:42:26.587572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.761 [2024-07-12 17:42:26.587593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.761 [2024-07-12 17:42:26.590310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.761 [2024-07-12 17:42:26.599356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.761 [2024-07-12 17:42:26.599714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.599946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.599977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.761 [2024-07-12 17:42:26.599999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.761 [2024-07-12 17:42:26.600394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.761 [2024-07-12 17:42:26.600725] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.761 [2024-07-12 17:42:26.600738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.761 [2024-07-12 17:42:26.600748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.761 [2024-07-12 17:42:26.603432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.761 [2024-07-12 17:42:26.612368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.761 [2024-07-12 17:42:26.612873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.613028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.613060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.761 [2024-07-12 17:42:26.613081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.761 [2024-07-12 17:42:26.613475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.761 [2024-07-12 17:42:26.613873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.761 [2024-07-12 17:42:26.613898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.761 [2024-07-12 17:42:26.613920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.761 [2024-07-12 17:42:26.616581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.761 [2024-07-12 17:42:26.625366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.761 [2024-07-12 17:42:26.625848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.626079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.761 [2024-07-12 17:42:26.626111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.761 [2024-07-12 17:42:26.626133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.761 [2024-07-12 17:42:26.626531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.761 [2024-07-12 17:42:26.626865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.761 [2024-07-12 17:42:26.626889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.626911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.629569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.762 [2024-07-12 17:42:26.638322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.762 [2024-07-12 17:42:26.638608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.638745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.638777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.762 [2024-07-12 17:42:26.638799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.762 [2024-07-12 17:42:26.639029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.762 [2024-07-12 17:42:26.639379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.762 [2024-07-12 17:42:26.639405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.639428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.642499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.762 [2024-07-12 17:42:26.651363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.762 [2024-07-12 17:42:26.651590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.651713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.651728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.762 [2024-07-12 17:42:26.651739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.762 [2024-07-12 17:42:26.651892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.762 [2024-07-12 17:42:26.652043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.762 [2024-07-12 17:42:26.652060] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.652070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.654663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.762 [2024-07-12 17:42:26.664234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.762 [2024-07-12 17:42:26.664532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.664705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.664721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.762 [2024-07-12 17:42:26.664732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.762 [2024-07-12 17:42:26.664885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.762 [2024-07-12 17:42:26.665037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.762 [2024-07-12 17:42:26.665049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.665059] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.667883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.762 [2024-07-12 17:42:26.677021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.762 [2024-07-12 17:42:26.677352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.677536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.677568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.762 [2024-07-12 17:42:26.677590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.762 [2024-07-12 17:42:26.678034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.762 [2024-07-12 17:42:26.678261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.762 [2024-07-12 17:42:26.678275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.678285] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.680807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.762 [2024-07-12 17:42:26.690144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.762 [2024-07-12 17:42:26.690466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.690641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.690673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.762 [2024-07-12 17:42:26.690695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.762 [2024-07-12 17:42:26.691026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.762 [2024-07-12 17:42:26.691312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.762 [2024-07-12 17:42:26.691326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.691339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.693908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.762 [2024-07-12 17:42:26.703249] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.762 [2024-07-12 17:42:26.703688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.703808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.703824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.762 [2024-07-12 17:42:26.703834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.762 [2024-07-12 17:42:26.703985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.762 [2024-07-12 17:42:26.704160] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.762 [2024-07-12 17:42:26.704173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.704183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.706981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:47.762 [2024-07-12 17:42:26.716122] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.762 [2024-07-12 17:42:26.716507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.717927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:47.762 [2024-07-12 17:42:26.717963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:47.762 [2024-07-12 17:42:26.717979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:47.762 [2024-07-12 17:42:26.718284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:47.762 [2024-07-12 17:42:26.718542] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:47.762 [2024-07-12 17:42:26.718560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:47.762 [2024-07-12 17:42:26.718573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.762 [2024-07-12 17:42:26.722647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.022 [2024-07-12 17:42:26.729461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.022 [2024-07-12 17:42:26.729873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.022 [2024-07-12 17:42:26.730038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.022 [2024-07-12 17:42:26.730070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.022 [2024-07-12 17:42:26.730092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.022 [2024-07-12 17:42:26.730538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.022 [2024-07-12 17:42:26.730823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.022 [2024-07-12 17:42:26.730848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.022 [2024-07-12 17:42:26.730871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.022 [2024-07-12 17:42:26.733845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.022 [2024-07-12 17:42:26.742335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.022 [2024-07-12 17:42:26.742593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.022 [2024-07-12 17:42:26.742714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.022 [2024-07-12 17:42:26.742729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.022 [2024-07-12 17:42:26.742740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.022 [2024-07-12 17:42:26.742894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.022 [2024-07-12 17:42:26.743047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.022 [2024-07-12 17:42:26.743060] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.022 [2024-07-12 17:42:26.743070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.022 [2024-07-12 17:42:26.745807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.022 [2024-07-12 17:42:26.755338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.022 [2024-07-12 17:42:26.755845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.022 [2024-07-12 17:42:26.756100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.022 [2024-07-12 17:42:26.756116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.022 [2024-07-12 17:42:26.756126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.022 [2024-07-12 17:42:26.756331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.022 [2024-07-12 17:42:26.756553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.022 [2024-07-12 17:42:26.756565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.022 [2024-07-12 17:42:26.756575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.022 [2024-07-12 17:42:26.759401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.022 [2024-07-12 17:42:26.768295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.022 [2024-07-12 17:42:26.768728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.768905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.768921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.023 [2024-07-12 17:42:26.768931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.023 [2024-07-12 17:42:26.769106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.023 [2024-07-12 17:42:26.769333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.023 [2024-07-12 17:42:26.769347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.023 [2024-07-12 17:42:26.769357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.023 [2024-07-12 17:42:26.772061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.023 [2024-07-12 17:42:26.781520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.023 [2024-07-12 17:42:26.781849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.782105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.782121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.023 [2024-07-12 17:42:26.782132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.023 [2024-07-12 17:42:26.782337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.023 [2024-07-12 17:42:26.782559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.023 [2024-07-12 17:42:26.782572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.023 [2024-07-12 17:42:26.782582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.023 [2024-07-12 17:42:26.785492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.023 [2024-07-12 17:42:26.794672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.023 [2024-07-12 17:42:26.795103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.795363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.795380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.023 [2024-07-12 17:42:26.795391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.023 [2024-07-12 17:42:26.795613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.023 [2024-07-12 17:42:26.795788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.023 [2024-07-12 17:42:26.795801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.023 [2024-07-12 17:42:26.795811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.023 [2024-07-12 17:42:26.798276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.023 [2024-07-12 17:42:26.807824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.023 [2024-07-12 17:42:26.808232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.808495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.808512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.023 [2024-07-12 17:42:26.808522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.023 [2024-07-12 17:42:26.808722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.023 [2024-07-12 17:42:26.808876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.023 [2024-07-12 17:42:26.808889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.023 [2024-07-12 17:42:26.808899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.023 [2024-07-12 17:42:26.811428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.023 [2024-07-12 17:42:26.820675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.023 [2024-07-12 17:42:26.821105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.821363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.821381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.023 [2024-07-12 17:42:26.821392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.023 [2024-07-12 17:42:26.821569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.023 [2024-07-12 17:42:26.821745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.023 [2024-07-12 17:42:26.821757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.023 [2024-07-12 17:42:26.821768] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.023 [2024-07-12 17:42:26.824524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.023 [2024-07-12 17:42:26.833527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.023 [2024-07-12 17:42:26.833959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.834263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.834283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.023 [2024-07-12 17:42:26.834295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.023 [2024-07-12 17:42:26.834473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.023 [2024-07-12 17:42:26.834649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.023 [2024-07-12 17:42:26.834661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.023 [2024-07-12 17:42:26.834671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.023 [2024-07-12 17:42:26.837474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.023 [2024-07-12 17:42:26.846527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.023 [2024-07-12 17:42:26.846942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.847099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.023 [2024-07-12 17:42:26.847131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.023 [2024-07-12 17:42:26.847154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.023 [2024-07-12 17:42:26.847500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.023 [2024-07-12 17:42:26.847656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.023 [2024-07-12 17:42:26.847669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.023 [2024-07-12 17:42:26.847679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.023 [2024-07-12 17:42:26.850341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.023 [2024-07-12 17:42:26.859712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.023 [2024-07-12 17:42:26.860141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.023 [2024-07-12 17:42:26.860338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.023 [2024-07-12 17:42:26.860385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.023 [2024-07-12 17:42:26.860408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.023 [2024-07-12 17:42:26.860692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.023 [2024-07-12 17:42:26.861024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.023 [2024-07-12 17:42:26.861049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.023 [2024-07-12 17:42:26.861071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.023 [2024-07-12 17:42:26.864071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.023 [2024-07-12 17:42:26.872829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.023 [2024-07-12 17:42:26.873153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.023 [2024-07-12 17:42:26.873325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.023 [2024-07-12 17:42:26.873342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.023 [2024-07-12 17:42:26.873352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.023 [2024-07-12 17:42:26.873505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.023 [2024-07-12 17:42:26.873636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.023 [2024-07-12 17:42:26.873649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.023 [2024-07-12 17:42:26.873659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.023 [2024-07-12 17:42:26.876279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.023 [2024-07-12 17:42:26.885821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.023 [2024-07-12 17:42:26.886170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.023 [2024-07-12 17:42:26.886365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.023 [2024-07-12 17:42:26.886384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.023 [2024-07-12 17:42:26.886394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.023 [2024-07-12 17:42:26.886594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.023 [2024-07-12 17:42:26.886792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.023 [2024-07-12 17:42:26.886805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.023 [2024-07-12 17:42:26.886815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.023 [2024-07-12 17:42:26.889503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.023 [2024-07-12 17:42:26.898847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.024 [2024-07-12 17:42:26.899176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.899377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.899394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.024 [2024-07-12 17:42:26.899409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.024 [2024-07-12 17:42:26.899674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.024 [2024-07-12 17:42:26.899872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.024 [2024-07-12 17:42:26.899885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.024 [2024-07-12 17:42:26.899896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.024 [2024-07-12 17:42:26.902562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.024 [2024-07-12 17:42:26.911699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.024 [2024-07-12 17:42:26.912131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.912385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.912402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.024 [2024-07-12 17:42:26.912412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.024 [2024-07-12 17:42:26.912610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.024 [2024-07-12 17:42:26.912740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.024 [2024-07-12 17:42:26.912753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.024 [2024-07-12 17:42:26.912763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.024 [2024-07-12 17:42:26.915549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.024 [2024-07-12 17:42:26.924645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.024 [2024-07-12 17:42:26.925025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.925278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.925294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.024 [2024-07-12 17:42:26.925304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.024 [2024-07-12 17:42:26.925479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.024 [2024-07-12 17:42:26.925678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.024 [2024-07-12 17:42:26.925690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.024 [2024-07-12 17:42:26.925700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.024 [2024-07-12 17:42:26.928364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.024 [2024-07-12 17:42:26.937661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.024 [2024-07-12 17:42:26.938046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.938226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.938242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.024 [2024-07-12 17:42:26.938252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.024 [2024-07-12 17:42:26.938370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.024 [2024-07-12 17:42:26.938544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.024 [2024-07-12 17:42:26.938557] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.024 [2024-07-12 17:42:26.938567] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.024 [2024-07-12 17:42:26.941384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.024 [2024-07-12 17:42:26.950928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.024 [2024-07-12 17:42:26.951384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.951539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.951555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.024 [2024-07-12 17:42:26.951566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.024 [2024-07-12 17:42:26.951742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.024 [2024-07-12 17:42:26.951895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.024 [2024-07-12 17:42:26.951907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.024 [2024-07-12 17:42:26.951917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.024 [2024-07-12 17:42:26.954695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.024 [2024-07-12 17:42:26.963987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.024 [2024-07-12 17:42:26.964443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.964698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.964713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.024 [2024-07-12 17:42:26.964724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.024 [2024-07-12 17:42:26.964945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.024 [2024-07-12 17:42:26.965144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.024 [2024-07-12 17:42:26.965156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.024 [2024-07-12 17:42:26.965166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.024 [2024-07-12 17:42:26.967918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.024 [2024-07-12 17:42:26.976991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.024 [2024-07-12 17:42:26.977491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.977648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.024 [2024-07-12 17:42:26.977664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.024 [2024-07-12 17:42:26.977674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.024 [2024-07-12 17:42:26.977873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.024 [2024-07-12 17:42:26.978075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.024 [2024-07-12 17:42:26.978088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.024 [2024-07-12 17:42:26.978098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.024 [2024-07-12 17:42:26.980647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:26.990368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:26.990751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:26.990981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:26.990996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:26.991006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:26.991136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:26.991294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:26.991307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:26.991317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:26.994068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.003295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.003693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.003916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.003931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.003942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.004094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.004297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.004310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.004321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.007001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.016208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.016648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.016820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.016835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.016846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.016977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.017152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.017168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.017178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.019685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.029069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.029531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.029743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.029774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.029796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.030115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.030343] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.030356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.030366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.033271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.042140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.042504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.042670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.042685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.042695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.042915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.043090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.043103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.043113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.045841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.054969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.055392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.055617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.055632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.055642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.055864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.056039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.056052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.056065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.058797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.067861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.068289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.068479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.068510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.068533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.068815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.069119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.069133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.069143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.072034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.080805] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.081129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.081352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.081387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.081410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.081686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.081909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.081926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.081940] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.085849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.286 [2024-07-12 17:42:27.094518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.286 [2024-07-12 17:42:27.094948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.095200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.286 [2024-07-12 17:42:27.095233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.286 [2024-07-12 17:42:27.095271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.286 [2024-07-12 17:42:27.095753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.286 [2024-07-12 17:42:27.096028] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.286 [2024-07-12 17:42:27.096041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.286 [2024-07-12 17:42:27.096051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.286 [2024-07-12 17:42:27.098740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.287 [2024-07-12 17:42:27.107508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.287 [2024-07-12 17:42:27.107923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.287 [2024-07-12 17:42:27.108126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.287 [2024-07-12 17:42:27.108157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.287 [2024-07-12 17:42:27.108180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.287 [2024-07-12 17:42:27.108525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.287 [2024-07-12 17:42:27.108842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.287 [2024-07-12 17:42:27.108855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.287 [2024-07-12 17:42:27.108865] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.287 [2024-07-12 17:42:27.111708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.287 [2024-07-12 17:42:27.120636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.287 [2024-07-12 17:42:27.121038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.287 [2024-07-12 17:42:27.121268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.287 [2024-07-12 17:42:27.121285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.287 [2024-07-12 17:42:27.121296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.287 [2024-07-12 17:42:27.121494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.287 [2024-07-12 17:42:27.121693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.287 [2024-07-12 17:42:27.121706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.287 [2024-07-12 17:42:27.121715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.287 [2024-07-12 17:42:27.124424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.287 [2024-07-12 17:42:27.133604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.134034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.134329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.134373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.134398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.134661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.134836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.134849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.134859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.137723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.146473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.146938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.147228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.147274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.147298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.147581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.147757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.147769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.147779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.150484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.159728] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.160160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.160360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.160386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.160398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.160553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.160749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.160762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.160772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.163617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.172594] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.173100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.173389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.173423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.173445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.173687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.173840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.173853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.173863] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.176570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.185817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.186320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.186580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.186612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.186635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.186916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.187131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.187144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.187154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.190002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.198663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.199135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.199431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.199465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.199487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.199769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.199922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.199935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.199945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.202679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.211569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.212072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.212326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.212343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.212354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.212574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.212795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.212808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.212817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.215646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.224509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.224962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.225241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.225295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.225319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.225799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.226181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.226205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.226226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.229170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.237530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.237947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.238278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.238310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.238332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.238573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.238795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.238807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.238818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.287 [2024-07-12 17:42:27.241565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.287 [2024-07-12 17:42:27.250656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.287 [2024-07-12 17:42:27.251150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.251457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.287 [2024-07-12 17:42:27.251490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.287 [2024-07-12 17:42:27.251512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.287 [2024-07-12 17:42:27.251944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.287 [2024-07-12 17:42:27.252225] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.287 [2024-07-12 17:42:27.252249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.287 [2024-07-12 17:42:27.252285] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.255202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.263723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.264165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.264419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.264452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.264481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.264852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.265108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.265125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.265139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.269211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.276945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.277385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.277594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.277625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.277648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.277978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.278424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.278450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.278472] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.281129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.290148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.290600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.290889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.290920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.290942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.291339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.291614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.291627] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.291637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.294387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.303009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.303441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.303625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.303656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.303678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.304022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.304418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.304444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.304466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.307422] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.316109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.316497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.316704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.316719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.316729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.316950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.317103] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.317115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.317125] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.319722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.328948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.329334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.329508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.329523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.329534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.329687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.329864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.329877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.329886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.332731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.341719] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.342053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.342305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.342322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.342332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.342462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.342618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.342630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.342640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.345321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.354680] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.355055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.355316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.355350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.355373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.355796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.356052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.356070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.549 [2024-07-12 17:42:27.356084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.549 [2024-07-12 17:42:27.360162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.549 [2024-07-12 17:42:27.368257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.549 [2024-07-12 17:42:27.368800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.369142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.549 [2024-07-12 17:42:27.369173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.549 [2024-07-12 17:42:27.369195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.549 [2024-07-12 17:42:27.369595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.549 [2024-07-12 17:42:27.369852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.549 [2024-07-12 17:42:27.369866] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.369876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.372421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.381273] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.381668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.381939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.381970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.381991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.382336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.382771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.382804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.382824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.385494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.394304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.394610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.394841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.394871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.394892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.395175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.395572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.395598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.395625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.398511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.407130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.407618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.407907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.407938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.407960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.408205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.408367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.408380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.408390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.411001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.420191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.420624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.420863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.420879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.420890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.421064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.421217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.421229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.421243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.423817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.433168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.433650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.433846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.433862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.433872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.434048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.434246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.434267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.434278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.437299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.446220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.446677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.446868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.446899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.446921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.447318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.447754] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.447779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.447799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.450603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.459226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.459686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.460000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.460032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.460053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.460341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.460472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.460485] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.460495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.463180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.472385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.550 [2024-07-12 17:42:27.472868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.473049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.550 [2024-07-12 17:42:27.473081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.550 [2024-07-12 17:42:27.473104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.550 [2024-07-12 17:42:27.473398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.550 [2024-07-12 17:42:27.473771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.550 [2024-07-12 17:42:27.473784] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.550 [2024-07-12 17:42:27.473794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.550 [2024-07-12 17:42:27.476566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.550 [2024-07-12 17:42:27.485332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.550 [2024-07-12 17:42:27.485788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.550 [2024-07-12 17:42:27.485990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.550 [2024-07-12 17:42:27.486021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.550 [2024-07-12 17:42:27.486043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.550 [2024-07-12 17:42:27.486390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.550 [2024-07-12 17:42:27.486823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.550 [2024-07-12 17:42:27.486848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.550 [2024-07-12 17:42:27.486868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.550 [2024-07-12 17:42:27.489697] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.550 [2024-07-12 17:42:27.498219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.550 [2024-07-12 17:42:27.498676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.550 [2024-07-12 17:42:27.498990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.550 [2024-07-12 17:42:27.499021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.550 [2024-07-12 17:42:27.499043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.550 [2024-07-12 17:42:27.499488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.550 [2024-07-12 17:42:27.499883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.550 [2024-07-12 17:42:27.499896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.551 [2024-07-12 17:42:27.499906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.551 [2024-07-12 17:42:27.502587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.551 [2024-07-12 17:42:27.511147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.551 [2024-07-12 17:42:27.511588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.551 [2024-07-12 17:42:27.511844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.551 [2024-07-12 17:42:27.511876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.551 [2024-07-12 17:42:27.511898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.551 [2024-07-12 17:42:27.512128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.551 [2024-07-12 17:42:27.512388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.551 [2024-07-12 17:42:27.512402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.551 [2024-07-12 17:42:27.512413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.551 [2024-07-12 17:42:27.515119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.811 [2024-07-12 17:42:27.523806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.811 [2024-07-12 17:42:27.524271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.811 [2024-07-12 17:42:27.524561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.811 [2024-07-12 17:42:27.524594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.811 [2024-07-12 17:42:27.524616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.811 [2024-07-12 17:42:27.525047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.811 [2024-07-12 17:42:27.525271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.811 [2024-07-12 17:42:27.525285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.811 [2024-07-12 17:42:27.525295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.811 [2024-07-12 17:42:27.528129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.811 [2024-07-12 17:42:27.536870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.811 [2024-07-12 17:42:27.537334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.811 [2024-07-12 17:42:27.537597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.811 [2024-07-12 17:42:27.537629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.811 [2024-07-12 17:42:27.537652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.811 [2024-07-12 17:42:27.537934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.811 [2024-07-12 17:42:27.538095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.811 [2024-07-12 17:42:27.538108] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.811 [2024-07-12 17:42:27.538118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.811 [2024-07-12 17:42:27.541764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.811 [2024-07-12 17:42:27.550193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.811 [2024-07-12 17:42:27.550695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.811 [2024-07-12 17:42:27.550938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.550970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.550992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.551339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.551490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.551503] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.551513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.554330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.563107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.563494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.563726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.563758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.563781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.564061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.564347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.564360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.564370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.567234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.576012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.576521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.576774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.576790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.576800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.576953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.577060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.577073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.577083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.579839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.589127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.589585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.589691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.589711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.589722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.589874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.590071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.590083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.590093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.592711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.602023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.602461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.602697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.602728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.602750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.603229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.603461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.603475] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.603485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.606232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.615156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.615688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.615891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.615922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.615944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.616402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.616601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.616614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.616624] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.619261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.628089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.628500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.628773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.628804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.628835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.629137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.629344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.629358] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.629368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.633141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.641630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.642130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.642370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.642388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.642398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.642617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.642770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.642783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.642793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.645792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.654696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.655191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.655535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.655569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.655591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.655973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.656307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.656320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.656330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.659097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.667679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.812 [2024-07-12 17:42:27.668113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.668322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.812 [2024-07-12 17:42:27.668356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.812 [2024-07-12 17:42:27.668378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.812 [2024-07-12 17:42:27.668726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.812 [2024-07-12 17:42:27.668924] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.812 [2024-07-12 17:42:27.668937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.812 [2024-07-12 17:42:27.668947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.812 [2024-07-12 17:42:27.671606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.812 [2024-07-12 17:42:27.680708] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.813 [2024-07-12 17:42:27.681181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.681361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.681377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.813 [2024-07-12 17:42:27.681388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.813 [2024-07-12 17:42:27.681542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.813 [2024-07-12 17:42:27.681650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.813 [2024-07-12 17:42:27.681662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.813 [2024-07-12 17:42:27.681672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.813 [2024-07-12 17:42:27.684447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.813 [2024-07-12 17:42:27.693641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.813 [2024-07-12 17:42:27.694115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.694424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.694457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.813 [2024-07-12 17:42:27.694480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.813 [2024-07-12 17:42:27.694770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.813 [2024-07-12 17:42:27.694946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.813 [2024-07-12 17:42:27.694958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.813 [2024-07-12 17:42:27.694967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.813 [2024-07-12 17:42:27.697718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.813 [2024-07-12 17:42:27.706703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.813 [2024-07-12 17:42:27.707189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.707482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.707516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.813 [2024-07-12 17:42:27.707537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.813 [2024-07-12 17:42:27.707702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.813 [2024-07-12 17:42:27.707904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.813 [2024-07-12 17:42:27.707917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.813 [2024-07-12 17:42:27.707927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.813 [2024-07-12 17:42:27.710632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.813 [2024-07-12 17:42:27.719690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:48.813 [2024-07-12 17:42:27.720126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.720366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.813 [2024-07-12 17:42:27.720400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:48.813 [2024-07-12 17:42:27.720422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:48.813 [2024-07-12 17:42:27.720817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:48.813 [2024-07-12 17:42:27.721017] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.813 [2024-07-12 17:42:27.721029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:48.813 [2024-07-12 17:42:27.721039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:48.813 [2024-07-12 17:42:27.723722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:48.813 [2024-07-12 17:42:27.732796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.813 [2024-07-12 17:42:27.733273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.733583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.733615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.813 [2024-07-12 17:42:27.733637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.813 [2024-07-12 17:42:27.733789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.813 [2024-07-12 17:42:27.734010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.813 [2024-07-12 17:42:27.734022] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.813 [2024-07-12 17:42:27.734032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.813 [2024-07-12 17:42:27.736856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.813 [2024-07-12 17:42:27.745978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.813 [2024-07-12 17:42:27.746411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.746668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.746699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.813 [2024-07-12 17:42:27.746721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.813 [2024-07-12 17:42:27.747003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.813 [2024-07-12 17:42:27.747400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.813 [2024-07-12 17:42:27.747418] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.813 [2024-07-12 17:42:27.747428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.813 [2024-07-12 17:42:27.750454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.813 [2024-07-12 17:42:27.758985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.813 [2024-07-12 17:42:27.759410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.759694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.759726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.813 [2024-07-12 17:42:27.759749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.813 [2024-07-12 17:42:27.760180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.813 [2024-07-12 17:42:27.760626] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.813 [2024-07-12 17:42:27.760653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.813 [2024-07-12 17:42:27.760674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.813 [2024-07-12 17:42:27.764293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:48.813 [2024-07-12 17:42:27.772566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:48.813 [2024-07-12 17:42:27.773000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.773250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.813 [2024-07-12 17:42:27.773273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:48.813 [2024-07-12 17:42:27.773284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:48.813 [2024-07-12 17:42:27.773482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:48.813 [2024-07-12 17:42:27.773680] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:48.813 [2024-07-12 17:42:27.773693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:48.813 [2024-07-12 17:42:27.773702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:48.813 [2024-07-12 17:42:27.776225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.074 [2024-07-12 17:42:27.785645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.074 [2024-07-12 17:42:27.786101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.786413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.786446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.074 [2024-07-12 17:42:27.786469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.074 [2024-07-12 17:42:27.786800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.074 [2024-07-12 17:42:27.787025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.074 [2024-07-12 17:42:27.787038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.074 [2024-07-12 17:42:27.787052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.074 [2024-07-12 17:42:27.789673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.074 [2024-07-12 17:42:27.798666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.074 [2024-07-12 17:42:27.799119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.799301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.799334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.074 [2024-07-12 17:42:27.799357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.074 [2024-07-12 17:42:27.799786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.074 [2024-07-12 17:42:27.799999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.074 [2024-07-12 17:42:27.800011] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.074 [2024-07-12 17:42:27.800021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.074 [2024-07-12 17:42:27.802662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.074 [2024-07-12 17:42:27.811629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.074 [2024-07-12 17:42:27.812097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.812408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.812441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.074 [2024-07-12 17:42:27.812464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.074 [2024-07-12 17:42:27.812743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.074 [2024-07-12 17:42:27.812919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.074 [2024-07-12 17:42:27.812931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.074 [2024-07-12 17:42:27.812941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.074 [2024-07-12 17:42:27.815612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.074 [2024-07-12 17:42:27.824696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.074 [2024-07-12 17:42:27.825131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.825397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.825430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.074 [2024-07-12 17:42:27.825453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.074 [2024-07-12 17:42:27.825693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.074 [2024-07-12 17:42:27.825846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.074 [2024-07-12 17:42:27.825858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.074 [2024-07-12 17:42:27.825868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.074 [2024-07-12 17:42:27.828533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.074 [2024-07-12 17:42:27.837894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.074 [2024-07-12 17:42:27.838230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.838433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.074 [2024-07-12 17:42:27.838450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.074 [2024-07-12 17:42:27.838460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.074 [2024-07-12 17:42:27.838682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.074 [2024-07-12 17:42:27.838880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.838893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.838903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.841702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.850655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.851094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.851417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.851450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.851473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.851771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.851948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.851961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.851971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.855501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.864097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.864545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.864829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.864860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.864894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.865023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.865220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.865233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.865242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.868085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.877106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.877627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.877914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.877945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.877967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.878311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.878465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.878478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.878488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.881373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.890151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.890659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.890972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.891004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.891026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.891303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.891503] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.891515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.891526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.894343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.902739] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.903213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.903528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.903560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.903583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.903966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.904277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.904291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.904301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.906866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.915799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.916296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.916534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.916569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.916593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.917073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.917299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.917311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.917321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.919957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.928766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.929156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.929426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.929460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.929482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.929865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.930234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.930247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.930263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.933099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.941865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.942350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.942621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.942652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.942675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.942985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.943200] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.943218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.943232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.947539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.955420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.955870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.956137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.956176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.956200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.956546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.956979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.957004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.957025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.075 [2024-07-12 17:42:27.959730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.075 [2024-07-12 17:42:27.968341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.075 [2024-07-12 17:42:27.968728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.968933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.075 [2024-07-12 17:42:27.968970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.075 [2024-07-12 17:42:27.968993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.075 [2024-07-12 17:42:27.969388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.075 [2024-07-12 17:42:27.969871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.075 [2024-07-12 17:42:27.969895] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.075 [2024-07-12 17:42:27.969916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.076 [2024-07-12 17:42:27.973037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.076 [2024-07-12 17:42:27.981505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.076 [2024-07-12 17:42:27.981942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:27.982201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:27.982232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.076 [2024-07-12 17:42:27.982268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.076 [2024-07-12 17:42:27.982601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.076 [2024-07-12 17:42:27.983033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.076 [2024-07-12 17:42:27.983058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.076 [2024-07-12 17:42:27.983078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.076 [2024-07-12 17:42:27.985579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.076 [2024-07-12 17:42:27.994646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.076 [2024-07-12 17:42:27.995052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:27.995247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:27.995270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.076 [2024-07-12 17:42:27.995285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.076 [2024-07-12 17:42:27.995484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.076 [2024-07-12 17:42:27.995659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.076 [2024-07-12 17:42:27.995672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.076 [2024-07-12 17:42:27.995682] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.076 [2024-07-12 17:42:27.998370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.076 [2024-07-12 17:42:28.007555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.076 [2024-07-12 17:42:28.008028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:28.008240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:28.008261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.076 [2024-07-12 17:42:28.008272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.076 [2024-07-12 17:42:28.008424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.076 [2024-07-12 17:42:28.008599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.076 [2024-07-12 17:42:28.008611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.076 [2024-07-12 17:42:28.008621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.076 [2024-07-12 17:42:28.011352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.076 [2024-07-12 17:42:28.020468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.076 [2024-07-12 17:42:28.020928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:28.021154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:28.021170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.076 [2024-07-12 17:42:28.021181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.076 [2024-07-12 17:42:28.021431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.076 [2024-07-12 17:42:28.021585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.076 [2024-07-12 17:42:28.021597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.076 [2024-07-12 17:42:28.021607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.076 [2024-07-12 17:42:28.024176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.076 [2024-07-12 17:42:28.033764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.076 [2024-07-12 17:42:28.034283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:28.034556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.076 [2024-07-12 17:42:28.034588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.076 [2024-07-12 17:42:28.034611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.076 [2024-07-12 17:42:28.035085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.076 [2024-07-12 17:42:28.035418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.076 [2024-07-12 17:42:28.035437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.076 [2024-07-12 17:42:28.035451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.076 [2024-07-12 17:42:28.039402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.337 [2024-07-12 17:42:28.046993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.337 [2024-07-12 17:42:28.047558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.337 [2024-07-12 17:42:28.047791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.337 [2024-07-12 17:42:28.047822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.337 [2024-07-12 17:42:28.047844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.337 [2024-07-12 17:42:28.048178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.337 [2024-07-12 17:42:28.048575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.337 [2024-07-12 17:42:28.048588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.337 [2024-07-12 17:42:28.048598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.337 [2024-07-12 17:42:28.051332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.337 [2024-07-12 17:42:28.059950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.337 [2024-07-12 17:42:28.060401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.337 [2024-07-12 17:42:28.060563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.337 [2024-07-12 17:42:28.060578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.337 [2024-07-12 17:42:28.060589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.337 [2024-07-12 17:42:28.060785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.337 [2024-07-12 17:42:28.060938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.337 [2024-07-12 17:42:28.060950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.337 [2024-07-12 17:42:28.060959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.337 [2024-07-12 17:42:28.063683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.337 [2024-07-12 17:42:28.072840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.337 [2024-07-12 17:42:28.073325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.337 [2024-07-12 17:42:28.073583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.337 [2024-07-12 17:42:28.073615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.337 [2024-07-12 17:42:28.073637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.337 [2024-07-12 17:42:28.073966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.338 [2024-07-12 17:42:28.074419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.338 [2024-07-12 17:42:28.074445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.338 [2024-07-12 17:42:28.074465] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.338 [2024-07-12 17:42:28.077321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.338 [2024-07-12 17:42:28.085872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.086202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.086428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.086444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.086454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.086652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.086827] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.086838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.086847] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.089671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.098898] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.099334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.099544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.099575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.099598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.100026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.100471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.100498] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.100518] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.103355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.111842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.112249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.112592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.112623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.112644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.113173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.113412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.113428] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.113437] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.116449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.124510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.124796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.125050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.125081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.125102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.125449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.125659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.125670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.125680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.128367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.137352] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.137694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.137996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.138027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.138050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.138394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.138602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.138613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.138623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.141239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.150212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.150575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.150738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.150769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.150790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.151173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.151620] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.151647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.151675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.154291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.163482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.163922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.164100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.164115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.164125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.164309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.164462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.164473] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.164483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.167253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.176451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.176820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.177004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.177035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.177056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.177543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.177697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.177709] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.177719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.180450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.189404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.189838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.190004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.190034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.190056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.338 [2024-07-12 17:42:28.190501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.338 [2024-07-12 17:42:28.190768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.338 [2024-07-12 17:42:28.190780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.338 [2024-07-12 17:42:28.190789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.338 [2024-07-12 17:42:28.193500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.338 [2024-07-12 17:42:28.202494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.338 [2024-07-12 17:42:28.202878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.203087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.338 [2024-07-12 17:42:28.203118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.338 [2024-07-12 17:42:28.203140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 [2024-07-12 17:42:28.203583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 [2024-07-12 17:42:28.203916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.203950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.203960] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 [2024-07-12 17:42:28.206742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.339 [2024-07-12 17:42:28.215492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.339 [2024-07-12 17:42:28.215870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.216098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.216113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.339 [2024-07-12 17:42:28.216123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 [2024-07-12 17:42:28.216329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 [2024-07-12 17:42:28.216481] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.216494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.216503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 [2024-07-12 17:42:28.219480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.339 [2024-07-12 17:42:28.228608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.339 [2024-07-12 17:42:28.228887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.229142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.229156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.339 [2024-07-12 17:42:28.229166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 [2024-07-12 17:42:28.229394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 [2024-07-12 17:42:28.229570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.229581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.229591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 [2024-07-12 17:42:28.232571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.339 [2024-07-12 17:42:28.241317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.339 [2024-07-12 17:42:28.241678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.241934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.241949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.339 [2024-07-12 17:42:28.241959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 [2024-07-12 17:42:28.242112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 [2024-07-12 17:42:28.242318] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.242331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.242341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 [2024-07-12 17:42:28.245067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.339 [2024-07-12 17:42:28.254322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.339 [2024-07-12 17:42:28.254678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.254932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.254947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.339 [2024-07-12 17:42:28.254957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 [2024-07-12 17:42:28.255108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 [2024-07-12 17:42:28.255237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.255248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.255265] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 [2024-07-12 17:42:28.257973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.339 [2024-07-12 17:42:28.267292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.339 [2024-07-12 17:42:28.267656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.267856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.267870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.339 [2024-07-12 17:42:28.267880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 [2024-07-12 17:42:28.268100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 [2024-07-12 17:42:28.268304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.268316] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.268326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 [2024-07-12 17:42:28.271074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 118001 Killed "${NVMF_APP[@]}" "$@" 00:31:49.339 17:42:28 -- host/bdevperf.sh@36 -- # tgt_init 00:31:49.339 17:42:28 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:49.339 17:42:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:49.339 17:42:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:49.339 17:42:28 -- common/autotest_common.sh@10 -- # set +x 00:31:49.339 [2024-07-12 17:42:28.280159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.339 17:42:28 -- nvmf/common.sh@469 -- # nvmfpid=119510 00:31:49.339 [2024-07-12 17:42:28.280544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.280799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.280814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.339 17:42:28 -- nvmf/common.sh@470 -- # waitforlisten 119510 00:31:49.339 [2024-07-12 17:42:28.280824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 17:42:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:49.339 [2024-07-12 17:42:28.281069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 17:42:28 -- common/autotest_common.sh@819 -- # '[' -z 119510 ']' 00:31:49.339 [2024-07-12 17:42:28.281244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.281264] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.281274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 17:42:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.339 17:42:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:49.339 17:42:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.339 17:42:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:49.339 17:42:28 -- common/autotest_common.sh@10 -- # set +x 00:31:49.339 [2024-07-12 17:42:28.283911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.339 [2024-07-12 17:42:28.293084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.339 [2024-07-12 17:42:28.293551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.293742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.339 [2024-07-12 17:42:28.293756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.339 [2024-07-12 17:42:28.293766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.339 [2024-07-12 17:42:28.293940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.339 [2024-07-12 17:42:28.294092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.339 [2024-07-12 17:42:28.294103] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.339 [2024-07-12 17:42:28.294113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.339 [2024-07-12 17:42:28.296755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.600 [2024-07-12 17:42:28.306164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.600 [2024-07-12 17:42:28.306588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.600 [2024-07-12 17:42:28.306768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.600 [2024-07-12 17:42:28.306786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.600 [2024-07-12 17:42:28.306797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.600 [2024-07-12 17:42:28.306948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.601 [2024-07-12 17:42:28.307077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.601 [2024-07-12 17:42:28.307088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.601 [2024-07-12 17:42:28.307098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.601 [2024-07-12 17:42:28.309634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.601 [2024-07-12 17:42:28.319027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.601 [2024-07-12 17:42:28.319487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.601 [2024-07-12 17:42:28.319631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.601 [2024-07-12 17:42:28.319646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.601 [2024-07-12 17:42:28.319656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.601 [2024-07-12 17:42:28.319832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.601 [2024-07-12 17:42:28.320008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.601 [2024-07-12 17:42:28.320019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.601 [2024-07-12 17:42:28.320029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.601 [2024-07-12 17:42:28.322808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:49.601 [2024-07-12 17:42:28.329358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:49.601 [2024-07-12 17:42:28.329416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.601 [2024-07-12 17:42:28.332089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:49.601 [2024-07-12 17:42:28.332528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.601 [2024-07-12 17:42:28.332652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.601 [2024-07-12 17:42:28.332666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:49.601 [2024-07-12 17:42:28.332677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:49.601 [2024-07-12 17:42:28.332851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:49.601 [2024-07-12 17:42:28.332980] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.601 [2024-07-12 17:42:28.332992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:49.601 [2024-07-12 17:42:28.333002] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:49.601 [2024-07-12 17:42:28.335786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.601 [2024-07-12 17:42:28.345210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.601 [2024-07-12 17:42:28.345654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.345846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.345860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.601 [2024-07-12 17:42:28.345871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.601 [2024-07-12 17:42:28.346045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.601 [2024-07-12 17:42:28.346242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.601 [2024-07-12 17:42:28.346260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.601 [2024-07-12 17:42:28.346271] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.601 [2024-07-12 17:42:28.349111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.601 [2024-07-12 17:42:28.358514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.601 [2024-07-12 17:42:28.358878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.359055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.359070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.601 [2024-07-12 17:42:28.359080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.601 [2024-07-12 17:42:28.359287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.601 [2024-07-12 17:42:28.359485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.601 [2024-07-12 17:42:28.359496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.601 [2024-07-12 17:42:28.359506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.601 [2024-07-12 17:42:28.362327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.601 EAL: No free 2048 kB hugepages reported on node 1
00:31:49.601 [2024-07-12 17:42:28.371823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.601 [2024-07-12 17:42:28.372250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.372412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.372427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.601 [2024-07-12 17:42:28.372438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.601 [2024-07-12 17:42:28.372612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.601 [2024-07-12 17:42:28.372763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.601 [2024-07-12 17:42:28.372775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.601 [2024-07-12 17:42:28.372784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.601 [2024-07-12 17:42:28.375500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.601 [2024-07-12 17:42:28.384953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.601 [2024-07-12 17:42:28.385358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.385477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.385492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.601 [2024-07-12 17:42:28.385502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.601 [2024-07-12 17:42:28.385631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.601 [2024-07-12 17:42:28.385761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.601 [2024-07-12 17:42:28.385772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.601 [2024-07-12 17:42:28.385782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.601 [2024-07-12 17:42:28.388494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.601 [2024-07-12 17:42:28.398197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.601 [2024-07-12 17:42:28.398611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.398866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.398881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.601 [2024-07-12 17:42:28.398891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.601 [2024-07-12 17:42:28.399064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.601 [2024-07-12 17:42:28.399217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.601 [2024-07-12 17:42:28.399229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.601 [2024-07-12 17:42:28.399238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.601 [2024-07-12 17:42:28.401794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.601 [2024-07-12 17:42:28.406693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:49.601 [2024-07-12 17:42:28.411276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.601 [2024-07-12 17:42:28.411646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.411900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.411915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.601 [2024-07-12 17:42:28.411926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.601 [2024-07-12 17:42:28.412032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.601 [2024-07-12 17:42:28.412207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.601 [2024-07-12 17:42:28.412219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.601 [2024-07-12 17:42:28.412229] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.601 [2024-07-12 17:42:28.415237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.601 [2024-07-12 17:42:28.424389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.601 [2024-07-12 17:42:28.424719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.424971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.601 [2024-07-12 17:42:28.424992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.601 [2024-07-12 17:42:28.425003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.601 [2024-07-12 17:42:28.425201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.425406] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.425419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.425429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.428205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.437214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.437563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.437795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.437811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.437822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.438046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.438199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.438210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.438221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.440984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.448660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:31:49.602 [2024-07-12 17:42:28.448795] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:49.602 [2024-07-12 17:42:28.448807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:49.602 [2024-07-12 17:42:28.448816] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:49.602 [2024-07-12 17:42:28.448876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:31:49.602 [2024-07-12 17:42:28.448949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:31:49.602 [2024-07-12 17:42:28.449062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:49.602 [2024-07-12 17:42:28.450315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.450662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.450856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.450872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.450883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.451106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.451291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.451305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.451320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.454006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.463343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.463818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.463952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.463968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.463979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.464133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.464340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.464354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.464365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.466827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.476577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.476901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.477157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.477173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.477185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.477391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.477568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.477581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.477591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.480047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.489564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.490016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.490294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.490310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.490322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.490520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.490650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.490662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.490672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.493658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.502534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.502895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.503092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.503108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.503119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.503347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.503523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.503535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.503544] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.506435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.515669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.516123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.516299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.516315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.516326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.516500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.516698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.602 [2024-07-12 17:42:28.516710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.602 [2024-07-12 17:42:28.516719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.602 [2024-07-12 17:42:28.519584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.602 [2024-07-12 17:42:28.528383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.602 [2024-07-12 17:42:28.528808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.529049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.602 [2024-07-12 17:42:28.529065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.602 [2024-07-12 17:42:28.529076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.602 [2024-07-12 17:42:28.529228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.602 [2024-07-12 17:42:28.529455] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.603 [2024-07-12 17:42:28.529468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.603 [2024-07-12 17:42:28.529478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.603 [2024-07-12 17:42:28.532179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.603 [2024-07-12 17:42:28.541327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.603 [2024-07-12 17:42:28.541799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.603 [2024-07-12 17:42:28.541955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.603 [2024-07-12 17:42:28.541970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.603 [2024-07-12 17:42:28.541980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.603 [2024-07-12 17:42:28.542109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.603 [2024-07-12 17:42:28.542314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.603 [2024-07-12 17:42:28.542327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.603 [2024-07-12 17:42:28.542337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.603 [2024-07-12 17:42:28.545173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.603 [2024-07-12 17:42:28.554052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.603 [2024-07-12 17:42:28.554469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.603 [2024-07-12 17:42:28.554721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.603 [2024-07-12 17:42:28.554736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.603 [2024-07-12 17:42:28.554747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.603 [2024-07-12 17:42:28.554945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.603 [2024-07-12 17:42:28.555142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.603 [2024-07-12 17:42:28.555154] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.603 [2024-07-12 17:42:28.555163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.603 [2024-07-12 17:42:28.557847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.603 [2024-07-12 17:42:28.566913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.863 [2024-07-12 17:42:28.567396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.567657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.567672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.863 [2024-07-12 17:42:28.567682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.863 [2024-07-12 17:42:28.567835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.863 [2024-07-12 17:42:28.568033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.863 [2024-07-12 17:42:28.568044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.863 [2024-07-12 17:42:28.568054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.863 [2024-07-12 17:42:28.570627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.863 [2024-07-12 17:42:28.579915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.863 [2024-07-12 17:42:28.580384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.580666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.580681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.863 [2024-07-12 17:42:28.580691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.863 [2024-07-12 17:42:28.580867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.863 [2024-07-12 17:42:28.581064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.863 [2024-07-12 17:42:28.581075] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.863 [2024-07-12 17:42:28.581084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.863 [2024-07-12 17:42:28.583924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.863 [2024-07-12 17:42:28.592901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.863 [2024-07-12 17:42:28.593287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.593537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.593552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.863 [2024-07-12 17:42:28.593562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.863 [2024-07-12 17:42:28.593781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.863 [2024-07-12 17:42:28.594002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.863 [2024-07-12 17:42:28.594014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.863 [2024-07-12 17:42:28.594023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.863 [2024-07-12 17:42:28.596681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.863 [2024-07-12 17:42:28.605628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.863 [2024-07-12 17:42:28.606075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.606328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.606344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.863 [2024-07-12 17:42:28.606354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.863 [2024-07-12 17:42:28.606529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.863 [2024-07-12 17:42:28.606726] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.863 [2024-07-12 17:42:28.606737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.863 [2024-07-12 17:42:28.606746] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.863 [2024-07-12 17:42:28.609582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.863 [2024-07-12 17:42:28.618649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.863 [2024-07-12 17:42:28.619137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.619314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.619330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.863 [2024-07-12 17:42:28.619340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.863 [2024-07-12 17:42:28.619492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.863 [2024-07-12 17:42:28.619666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.863 [2024-07-12 17:42:28.619677] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.863 [2024-07-12 17:42:28.619686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.863 [2024-07-12 17:42:28.622506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.863 [2024-07-12 17:42:28.631608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.863 [2024-07-12 17:42:28.632014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.863 [2024-07-12 17:42:28.632270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.632286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.632296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.632447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.632667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.632679] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.632688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.635278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.644561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.645013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.645193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.645208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.645218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.645397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.645572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.645584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.645593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.648206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.657741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.658217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.658446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.658462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.658476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.658627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.658824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.658835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.658845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.661685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.670637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.671019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.671185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.671201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.671211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.671346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.671544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.671556] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.671564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.674175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.683916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.684393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.684621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.684636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.684646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.684821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.685041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.685053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.685062] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.687700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.696902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.697286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.697454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.697469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.697479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.697657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.697831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.697843] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.697852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.700692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.710025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.710458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.710661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.710676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.710686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.710906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.711126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.711138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.711147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.713963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.723076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.723519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.723775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.723790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.723800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.723950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.724124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.724136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.724145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.726870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.736131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.736729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.737009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.737024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.864 [2024-07-12 17:42:28.737034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.864 [2024-07-12 17:42:28.737210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.864 [2024-07-12 17:42:28.737464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.864 [2024-07-12 17:42:28.737476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.864 [2024-07-12 17:42:28.737486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.864 [2024-07-12 17:42:28.740186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.864 [2024-07-12 17:42:28.749141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.864 [2024-07-12 17:42:28.749643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.749870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.864 [2024-07-12 17:42:28.749885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.865 [2024-07-12 17:42:28.749895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.865 [2024-07-12 17:42:28.750115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.865 [2024-07-12 17:42:28.750317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.865 [2024-07-12 17:42:28.750329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.865 [2024-07-12 17:42:28.750338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.865 [2024-07-12 17:42:28.753039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.865 [2024-07-12 17:42:28.761938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.865 [2024-07-12 17:42:28.762246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.762477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.762492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.865 [2024-07-12 17:42:28.762502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.865 [2024-07-12 17:42:28.762722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.865 [2024-07-12 17:42:28.762919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.865 [2024-07-12 17:42:28.762930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.865 [2024-07-12 17:42:28.762939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.865 [2024-07-12 17:42:28.765779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.865 [2024-07-12 17:42:28.774983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.865 [2024-07-12 17:42:28.775403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.775604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.775620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.865 [2024-07-12 17:42:28.775629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.865 [2024-07-12 17:42:28.775803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.865 [2024-07-12 17:42:28.776001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.865 [2024-07-12 17:42:28.776017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.865 [2024-07-12 17:42:28.776027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.865 [2024-07-12 17:42:28.778601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.865 [2024-07-12 17:42:28.787900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.865 [2024-07-12 17:42:28.788320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.788517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.788533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.865 [2024-07-12 17:42:28.788543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.865 [2024-07-12 17:42:28.788741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.865 [2024-07-12 17:42:28.788938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.865 [2024-07-12 17:42:28.788950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.865 [2024-07-12 17:42:28.788959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.865 [2024-07-12 17:42:28.791827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.865 [2024-07-12 17:42:28.800890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.865 [2024-07-12 17:42:28.801320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.801575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.801590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.865 [2024-07-12 17:42:28.801600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.865 [2024-07-12 17:42:28.801776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.865 [2024-07-12 17:42:28.801905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.865 [2024-07-12 17:42:28.801917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.865 [2024-07-12 17:42:28.801926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.865 [2024-07-12 17:42:28.804588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.865 [2024-07-12 17:42:28.813652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.865 [2024-07-12 17:42:28.814101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.814352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.814368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.865 [2024-07-12 17:42:28.814378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.865 [2024-07-12 17:42:28.814597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.865 [2024-07-12 17:42:28.814795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.865 [2024-07-12 17:42:28.814809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.865 [2024-07-12 17:42:28.814824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.865 [2024-07-12 17:42:28.817722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:49.865 [2024-07-12 17:42:28.826628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:49.865 [2024-07-12 17:42:28.827035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.827209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.865 [2024-07-12 17:42:28.827224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:49.865 [2024-07-12 17:42:28.827234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:49.865 [2024-07-12 17:42:28.827461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:49.865 [2024-07-12 17:42:28.827659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:49.865 [2024-07-12 17:42:28.827671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:49.865 [2024-07-12 17:42:28.827680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:49.865 [2024-07-12 17:42:28.830224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.125 [2024-07-12 17:42:28.839900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:50.125 [2024-07-12 17:42:28.840390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.125 [2024-07-12 17:42:28.840616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.125 [2024-07-12 17:42:28.840631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:50.125 [2024-07-12 17:42:28.840642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:50.125 [2024-07-12 17:42:28.840840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:50.125 [2024-07-12 17:42:28.840970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:50.125 [2024-07-12 17:42:28.840981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:50.125 [2024-07-12 17:42:28.840992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:50.125 [2024-07-12 17:42:28.843720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.125 [2024-07-12 17:42:28.852782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:50.125 [2024-07-12 17:42:28.853237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.125 [2024-07-12 17:42:28.853407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.125 [2024-07-12 17:42:28.853423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:50.126 [2024-07-12 17:42:28.853434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:50.126 [2024-07-12 17:42:28.853585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:50.126 [2024-07-12 17:42:28.853760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:50.126 [2024-07-12 17:42:28.853771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:50.126 [2024-07-12 17:42:28.853781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:50.126 [2024-07-12 17:42:28.856493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.126 [2024-07-12 17:42:28.865939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:50.126 [2024-07-12 17:42:28.866269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.866497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.866512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:50.126 [2024-07-12 17:42:28.866522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:50.126 [2024-07-12 17:42:28.866650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:50.126 [2024-07-12 17:42:28.866802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:50.126 [2024-07-12 17:42:28.866814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:50.126 [2024-07-12 17:42:28.866823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:50.126 [2024-07-12 17:42:28.869619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.126 [2024-07-12 17:42:28.878790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:50.126 [2024-07-12 17:42:28.879203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.879462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.879478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:50.126 [2024-07-12 17:42:28.879488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:50.126 [2024-07-12 17:42:28.879709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:50.126 [2024-07-12 17:42:28.879884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:50.126 [2024-07-12 17:42:28.879896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:50.126 [2024-07-12 17:42:28.879905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:50.126 [2024-07-12 17:42:28.882678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.126 [2024-07-12 17:42:28.891848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:50.126 [2024-07-12 17:42:28.892172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.892422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.892437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:50.126 [2024-07-12 17:42:28.892448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:50.126 [2024-07-12 17:42:28.892599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:50.126 [2024-07-12 17:42:28.892750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:50.126 [2024-07-12 17:42:28.892762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:50.126 [2024-07-12 17:42:28.892772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:50.126 [2024-07-12 17:42:28.895518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.126 [2024-07-12 17:42:28.904994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:50.126 [2024-07-12 17:42:28.905391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.905643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.905658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:50.126 [2024-07-12 17:42:28.905669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:50.126 [2024-07-12 17:42:28.905820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:50.126 [2024-07-12 17:42:28.905949] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:50.126 [2024-07-12 17:42:28.905961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:50.126 [2024-07-12 17:42:28.905970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:50.126 [2024-07-12 17:42:28.908628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.126 [2024-07-12 17:42:28.918008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:50.126 [2024-07-12 17:42:28.918463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.918720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:50.126 [2024-07-12 17:42:28.918735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420
00:31:50.126 [2024-07-12 17:42:28.918746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set
00:31:50.126 [2024-07-12 17:42:28.918943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor
00:31:50.126 [2024-07-12 17:42:28.919117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:50.126 [2024-07-12 17:42:28.919129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:50.126 [2024-07-12 17:42:28.919138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:50.126 [2024-07-12 17:42:28.921890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:50.126 [2024-07-12 17:42:28.930830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.126 [2024-07-12 17:42:28.931273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.931442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.931457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.126 [2024-07-12 17:42:28.931467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.126 [2024-07-12 17:42:28.931620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.126 [2024-07-12 17:42:28.931841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.126 [2024-07-12 17:42:28.931853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.126 [2024-07-12 17:42:28.931863] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.126 [2024-07-12 17:42:28.934500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.126 [2024-07-12 17:42:28.943906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.126 [2024-07-12 17:42:28.944316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.944492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.944507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.126 [2024-07-12 17:42:28.944518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.126 [2024-07-12 17:42:28.944669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.126 [2024-07-12 17:42:28.944866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.126 [2024-07-12 17:42:28.944877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.126 [2024-07-12 17:42:28.944886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.126 [2024-07-12 17:42:28.947749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.126 [2024-07-12 17:42:28.956806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.126 [2024-07-12 17:42:28.957236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.957413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.957428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.126 [2024-07-12 17:42:28.957438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.126 [2024-07-12 17:42:28.957590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.126 [2024-07-12 17:42:28.957742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.126 [2024-07-12 17:42:28.957754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.126 [2024-07-12 17:42:28.957763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.126 [2024-07-12 17:42:28.960512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.126 [2024-07-12 17:42:28.969887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.126 [2024-07-12 17:42:28.970296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.970469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.970484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.126 [2024-07-12 17:42:28.970494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.126 [2024-07-12 17:42:28.970691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.126 [2024-07-12 17:42:28.970888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.126 [2024-07-12 17:42:28.970899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.126 [2024-07-12 17:42:28.970908] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.126 [2024-07-12 17:42:28.973478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.126 [2024-07-12 17:42:28.983143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.126 [2024-07-12 17:42:28.983544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.126 [2024-07-12 17:42:28.983820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:28.983835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:28.983845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:28.984043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:28.984240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:28.984251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:28.984267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:28.987037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:28.996023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:28.996429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:28.996684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:28.996699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:28.996709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:28.996838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:28.997012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:28.997023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:28.997032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:28.999804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:29.008981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:29.009409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.009529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.009543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:29.009553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:29.009660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:29.009879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:29.009891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:29.009900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:29.012631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:29.021891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:29.022368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.022564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.022579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:29.022594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:29.022699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:29.022874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:29.022885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:29.022894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:29.025490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:29.034863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:29.035192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.035449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.035465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:29.035475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:29.035627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:29.035756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:29.035768] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:29.035777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:29.038253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:29.047841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:29.048271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.048521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.048536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:29.048546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:29.048718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:29.048871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:29.048882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:29.048891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:29.051710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:29.061360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:29.061761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.062008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.062023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:29.062033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:29.062262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:29.062414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:29.062426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:29.062435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:29.065343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:29.074539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:29.074889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.075137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.075152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:29.075162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:29.075319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:29.075449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:29.075461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:29.075470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:29.078170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.127 [2024-07-12 17:42:29.087480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.127 [2024-07-12 17:42:29.087951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.088180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.127 [2024-07-12 17:42:29.088195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.127 [2024-07-12 17:42:29.088205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.127 [2024-07-12 17:42:29.088362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.127 [2024-07-12 17:42:29.088582] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.127 [2024-07-12 17:42:29.088594] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.127 [2024-07-12 17:42:29.088603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.127 [2024-07-12 17:42:29.091149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.388 [2024-07-12 17:42:29.100488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.388 [2024-07-12 17:42:29.100892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.101145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.101160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.388 [2024-07-12 17:42:29.101170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.388 [2024-07-12 17:42:29.101331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.388 [2024-07-12 17:42:29.101553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.388 [2024-07-12 17:42:29.101565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.388 [2024-07-12 17:42:29.101574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.388 [2024-07-12 17:42:29.104414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.388 [2024-07-12 17:42:29.113635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.388 [2024-07-12 17:42:29.113991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.114244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.114265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.388 [2024-07-12 17:42:29.114275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.388 [2024-07-12 17:42:29.114449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.388 [2024-07-12 17:42:29.114600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.388 [2024-07-12 17:42:29.114612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.388 [2024-07-12 17:42:29.114622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.388 [2024-07-12 17:42:29.117558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.388 [2024-07-12 17:42:29.126454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.388 [2024-07-12 17:42:29.126817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.127085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.127099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.388 [2024-07-12 17:42:29.127110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.388 [2024-07-12 17:42:29.127239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.388 [2024-07-12 17:42:29.127418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.388 [2024-07-12 17:42:29.127430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.388 [2024-07-12 17:42:29.127439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.388 [2024-07-12 17:42:29.130228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.388 [2024-07-12 17:42:29.139173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.388 [2024-07-12 17:42:29.139555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.139804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.139819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.388 [2024-07-12 17:42:29.139829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.388 [2024-07-12 17:42:29.140004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.388 [2024-07-12 17:42:29.140210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.388 [2024-07-12 17:42:29.140222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.388 [2024-07-12 17:42:29.140231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.388 [2024-07-12 17:42:29.143096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.388 [2024-07-12 17:42:29.152247] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.388 [2024-07-12 17:42:29.152672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.152925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.152940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.388 [2024-07-12 17:42:29.152950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.388 [2024-07-12 17:42:29.153125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.388 [2024-07-12 17:42:29.153260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.388 [2024-07-12 17:42:29.153273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.388 [2024-07-12 17:42:29.153282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.388 [2024-07-12 17:42:29.156115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.388 [2024-07-12 17:42:29.165235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.388 [2024-07-12 17:42:29.165625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.165874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.165889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.388 [2024-07-12 17:42:29.165899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.388 [2024-07-12 17:42:29.166118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.388 [2024-07-12 17:42:29.166321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.388 [2024-07-12 17:42:29.166334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.388 [2024-07-12 17:42:29.166343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.388 [2024-07-12 17:42:29.169110] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.388 [2024-07-12 17:42:29.178105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.388 [2024-07-12 17:42:29.178639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.178891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.388 [2024-07-12 17:42:29.178906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.388 [2024-07-12 17:42:29.178917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.388 [2024-07-12 17:42:29.179115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.388 [2024-07-12 17:42:29.179317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.388 [2024-07-12 17:42:29.179330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.179344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.182042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.191174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.191632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.191882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.191897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.191908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.192104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.192306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.192319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.192329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.195236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.204045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.204449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.204705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.204720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.204730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.204928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.205125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.205137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.205147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.207783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.217151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.217563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.217790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.217805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.217815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.217944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.218141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.218153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.218166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.220561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.230264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.230764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.230891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.230906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.230916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.231135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.231315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.231327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.231337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.234064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.243356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.243767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.244023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.244039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.244049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.244179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.244382] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.244395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.244405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.247057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.256533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.256965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.257189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.257205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.257215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.257351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.257594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.257606] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.257615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 17:42:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:50.389 17:42:29 -- common/autotest_common.sh@852 -- # return 0 00:31:50.389 17:42:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:50.389 [2024-07-12 17:42:29.260228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 17:42:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:50.389 17:42:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.389 [2024-07-12 17:42:29.269602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.269861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.269985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.269999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.270010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.270185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.270388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.270400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.270410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.272935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.282669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 [2024-07-12 17:42:29.283141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.283420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.283438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.283448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.283670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 [2024-07-12 17:42:29.283867] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.283880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.283890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 [2024-07-12 17:42:29.286791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.295822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.389 17:42:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.389 [2024-07-12 17:42:29.296286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 17:42:29 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:50.389 [2024-07-12 17:42:29.296541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.389 [2024-07-12 17:42:29.296557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.389 [2024-07-12 17:42:29.296568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.389 [2024-07-12 17:42:29.296742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.389 17:42:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.389 [2024-07-12 17:42:29.296850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.389 [2024-07-12 17:42:29.296866] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.389 [2024-07-12 17:42:29.296876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.389 17:42:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.389 [2024-07-12 17:42:29.299516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.389 [2024-07-12 17:42:29.301382] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.389 17:42:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.389 17:42:29 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:50.390 17:42:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.390 17:42:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.390 [2024-07-12 17:42:29.309030] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.390 [2024-07-12 17:42:29.309375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 [2024-07-12 17:42:29.309551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 [2024-07-12 17:42:29.309566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.390 [2024-07-12 17:42:29.309576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.390 [2024-07-12 17:42:29.309795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.390 [2024-07-12 17:42:29.309947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.390 [2024-07-12 17:42:29.309958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.390 [2024-07-12 17:42:29.309967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.390 [2024-07-12 17:42:29.312811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.390 [2024-07-12 17:42:29.322091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.390 [2024-07-12 17:42:29.322526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 [2024-07-12 17:42:29.322783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 [2024-07-12 17:42:29.322799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.390 [2024-07-12 17:42:29.322809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.390 [2024-07-12 17:42:29.322984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.390 [2024-07-12 17:42:29.323114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.390 [2024-07-12 17:42:29.323125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.390 [2024-07-12 17:42:29.323134] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.390 [2024-07-12 17:42:29.325524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.390 [2024-07-12 17:42:29.335332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.390 [2024-07-12 17:42:29.335783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 [2024-07-12 17:42:29.336037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 [2024-07-12 17:42:29.336052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.390 [2024-07-12 17:42:29.336062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.390 [2024-07-12 17:42:29.336292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.390 [2024-07-12 17:42:29.336468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.390 [2024-07-12 17:42:29.336479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.390 [2024-07-12 17:42:29.336488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.390 [2024-07-12 17:42:29.339120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.390 [2024-07-12 17:42:29.348240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.390 [2024-07-12 17:42:29.348674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 Malloc0 00:31:50.390 [2024-07-12 17:42:29.348900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.390 [2024-07-12 17:42:29.348915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.390 [2024-07-12 17:42:29.348926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.390 [2024-07-12 17:42:29.349077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.390 [2024-07-12 17:42:29.349205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.390 [2024-07-12 17:42:29.349216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.390 [2024-07-12 17:42:29.349226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.390 17:42:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.390 17:42:29 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:50.390 17:42:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.390 17:42:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.390 [2024-07-12 17:42:29.351954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.648 17:42:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.648 [2024-07-12 17:42:29.361385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.648 17:42:29 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:50.648 17:42:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.648 [2024-07-12 17:42:29.361795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.648 [2024-07-12 17:42:29.362027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:50.648 17:42:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.648 [2024-07-12 17:42:29.362042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383380 with addr=10.0.0.2, port=4420 00:31:50.648 [2024-07-12 17:42:29.362053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383380 is same with the state(5) to be set 00:31:50.648 [2024-07-12 17:42:29.362277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383380 (9): Bad file descriptor 00:31:50.648 [2024-07-12 17:42:29.362430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:50.648 [2024-07-12 17:42:29.362442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:50.648 [2024-07-12 17:42:29.362452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:50.648 [2024-07-12 17:42:29.364972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:50.648 17:42:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.648 17:42:29 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.648 17:42:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.648 17:42:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.648 [2024-07-12 17:42:29.372685] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.648 [2024-07-12 17:42:29.374215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:50.648 17:42:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.648 17:42:29 -- host/bdevperf.sh@38 -- # wait 118460 00:31:50.648 [2024-07-12 17:42:29.404447] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:58.764 00:31:58.764 Latency(us) 00:31:58.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:58.764 Verification LBA range: start 0x0 length 0x4000 00:31:58.764 Nvme1n1 : 15.01 8363.45 32.67 12764.43 0.00 6040.08 577.16 20852.36 00:31:58.764 =================================================================================================================== 00:31:58.764 Total : 8363.45 32.67 12764.43 0.00 6040.08 577.16 20852.36 00:31:59.022 17:42:37 -- host/bdevperf.sh@39 -- # sync 00:31:59.022 17:42:37 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.022 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.022 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:31:59.022 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.022 17:42:37 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:59.022 17:42:37 -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:59.022 17:42:37 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:31:59.022 17:42:37 -- nvmf/common.sh@116 -- # sync 00:31:59.022 17:42:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:59.022 17:42:37 -- nvmf/common.sh@119 -- # set +e 00:31:59.022 17:42:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:59.022 17:42:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:59.022 rmmod nvme_tcp 00:31:59.022 rmmod nvme_fabrics 00:31:59.022 rmmod nvme_keyring 00:31:59.022 17:42:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:59.022 17:42:37 -- nvmf/common.sh@123 -- # set -e 00:31:59.022 17:42:37 -- nvmf/common.sh@124 -- # return 0 00:31:59.022 17:42:37 -- nvmf/common.sh@477 -- # '[' -n 119510 ']' 00:31:59.022 17:42:37 -- nvmf/common.sh@478 -- # killprocess 119510 00:31:59.022 17:42:37 -- common/autotest_common.sh@926 -- # '[' -z 119510 ']' 00:31:59.022 17:42:37 -- common/autotest_common.sh@930 -- # kill -0 119510 00:31:59.022 17:42:37 -- common/autotest_common.sh@931 -- # uname 00:31:59.022 17:42:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:59.022 17:42:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119510 00:31:59.022 17:42:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:59.022 17:42:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:59.022 17:42:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119510' 00:31:59.022 killing process with pid 119510 00:31:59.022 17:42:37 -- common/autotest_common.sh@945 -- # kill 119510 00:31:59.022 17:42:37 -- common/autotest_common.sh@950 -- # wait 119510 00:31:59.280 17:42:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:59.280 17:42:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:59.280 17:42:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:59.280 17:42:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.280 17:42:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:59.280 
17:42:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.280 17:42:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.280 17:42:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.815 17:42:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:01.815 00:32:01.815 real 0m26.218s 00:32:01.815 user 1m2.942s 00:32:01.815 sys 0m6.354s 00:32:01.815 17:42:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.815 17:42:40 -- common/autotest_common.sh@10 -- # set +x 00:32:01.815 ************************************ 00:32:01.815 END TEST nvmf_bdevperf 00:32:01.815 ************************************ 00:32:01.815 17:42:40 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:01.815 17:42:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:01.815 17:42:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:01.815 17:42:40 -- common/autotest_common.sh@10 -- # set +x 00:32:01.815 ************************************ 00:32:01.815 START TEST nvmf_target_disconnect 00:32:01.815 ************************************ 00:32:01.815 17:42:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:01.815 * Looking for test storage... 
00:32:01.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:01.815 17:42:40 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.815 17:42:40 -- nvmf/common.sh@7 -- # uname -s 00:32:01.815 17:42:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.815 17:42:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.815 17:42:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.815 17:42:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.815 17:42:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.815 17:42:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.815 17:42:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.815 17:42:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.815 17:42:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.815 17:42:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.815 17:42:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:01.815 17:42:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:01.815 17:42:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.815 17:42:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.815 17:42:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.815 17:42:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.815 17:42:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.815 17:42:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.815 17:42:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.815 17:42:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.815 17:42:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.815 17:42:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.815 17:42:40 -- paths/export.sh@5 -- # export PATH 00:32:01.815 17:42:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.815 17:42:40 -- nvmf/common.sh@46 -- # : 0 00:32:01.815 17:42:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:01.815 17:42:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:01.815 17:42:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:01.815 17:42:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.815 17:42:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.815 17:42:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:01.815 17:42:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:01.815 17:42:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:01.815 17:42:40 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:01.815 17:42:40 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:01.815 17:42:40 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:01.815 17:42:40 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:01.815 17:42:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:01.815 17:42:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.815 17:42:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:01.815 17:42:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:01.815 17:42:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:01.815 17:42:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.816 17:42:40 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.816 17:42:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.816 17:42:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:01.816 17:42:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:01.816 17:42:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:01.816 17:42:40 -- common/autotest_common.sh@10 -- # set +x 00:32:07.086 17:42:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:07.086 17:42:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:07.086 17:42:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:07.086 17:42:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:07.086 17:42:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:07.086 17:42:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:07.086 17:42:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:07.086 17:42:45 -- nvmf/common.sh@294 -- # net_devs=() 00:32:07.086 17:42:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:07.086 17:42:45 -- nvmf/common.sh@295 -- # e810=() 00:32:07.086 17:42:45 -- nvmf/common.sh@295 -- # local -ga e810 00:32:07.086 17:42:45 -- nvmf/common.sh@296 -- # x722=() 00:32:07.086 17:42:45 -- nvmf/common.sh@296 -- # local -ga x722 00:32:07.087 17:42:45 -- nvmf/common.sh@297 -- # mlx=() 00:32:07.087 17:42:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:07.087 17:42:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.087 17:42:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:07.087 17:42:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:07.087 17:42:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:07.087 17:42:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:07.087 17:42:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:07.087 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:07.087 17:42:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:07.087 17:42:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:07.087 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:07.087 17:42:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:32:07.087 17:42:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:07.087 17:42:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:07.087 17:42:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.087 17:42:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:07.087 17:42:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.087 17:42:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:07.087 Found net devices under 0000:af:00.0: cvl_0_0 00:32:07.087 17:42:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.087 17:42:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:07.087 17:42:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.087 17:42:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:07.087 17:42:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.087 17:42:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:07.087 Found net devices under 0000:af:00.1: cvl_0_1 00:32:07.087 17:42:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.087 17:42:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:07.087 17:42:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:07.087 17:42:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:07.087 17:42:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.087 17:42:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.087 17:42:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.087 17:42:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:07.087 17:42:45 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.087 17:42:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.087 17:42:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:07.087 17:42:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.087 17:42:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.087 17:42:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:07.087 17:42:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:07.087 17:42:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.087 17:42:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.087 17:42:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.087 17:42:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.087 17:42:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:07.087 17:42:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.087 17:42:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.087 17:42:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.087 17:42:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:07.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:32:07.087 00:32:07.087 --- 10.0.0.2 ping statistics --- 00:32:07.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.087 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:32:07.087 17:42:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:32:07.087 00:32:07.087 --- 10.0.0.1 ping statistics --- 00:32:07.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.087 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:32:07.087 17:42:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.087 17:42:45 -- nvmf/common.sh@410 -- # return 0 00:32:07.087 17:42:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:07.087 17:42:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.087 17:42:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:07.087 17:42:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.087 17:42:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:07.087 17:42:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:07.087 17:42:45 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:07.087 17:42:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:07.087 17:42:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.087 17:42:45 -- common/autotest_common.sh@10 -- # set +x 00:32:07.087 ************************************ 00:32:07.087 START TEST nvmf_target_disconnect_tc1 00:32:07.087 ************************************ 00:32:07.087 17:42:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:32:07.087 17:42:45 -- host/target_disconnect.sh@32 -- # set +e 00:32:07.087 17:42:45 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:07.087 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.087 [2024-07-12 17:42:46.024042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.087 
[2024-07-12 17:42:46.024349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.087 [2024-07-12 17:42:46.024390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11aeae0 with addr=10.0.0.2, port=4420 00:32:07.087 [2024-07-12 17:42:46.024445] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:07.087 [2024-07-12 17:42:46.024481] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:07.087 [2024-07-12 17:42:46.024502] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:07.087 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:07.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:07.087 Initializing NVMe Controllers 00:32:07.087 17:42:46 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:07.087 17:42:46 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:07.087 17:42:46 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:32:07.087 17:42:46 -- common/autotest_common.sh@1132 -- # return 0 00:32:07.087 17:42:46 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:07.087 17:42:46 -- host/target_disconnect.sh@41 -- # set -e 00:32:07.087 00:32:07.087 real 0m0.115s 00:32:07.087 user 0m0.046s 00:32:07.087 sys 0m0.067s 00:32:07.087 17:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.087 17:42:46 -- common/autotest_common.sh@10 -- # set +x 00:32:07.087 ************************************ 00:32:07.087 END TEST nvmf_target_disconnect_tc1 00:32:07.087 ************************************ 00:32:07.344 17:42:46 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:07.344 17:42:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:07.344 17:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.344 17:42:46 -- common/autotest_common.sh@10 -- # set +x 00:32:07.344 
************************************ 00:32:07.344 START TEST nvmf_target_disconnect_tc2 00:32:07.344 ************************************ 00:32:07.345 17:42:46 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:32:07.345 17:42:46 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:07.345 17:42:46 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:07.345 17:42:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:07.345 17:42:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:07.345 17:42:46 -- common/autotest_common.sh@10 -- # set +x 00:32:07.345 17:42:46 -- nvmf/common.sh@469 -- # nvmfpid=124731 00:32:07.345 17:42:46 -- nvmf/common.sh@470 -- # waitforlisten 124731 00:32:07.345 17:42:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:07.345 17:42:46 -- common/autotest_common.sh@819 -- # '[' -z 124731 ']' 00:32:07.345 17:42:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.345 17:42:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:07.345 17:42:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.345 17:42:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:07.345 17:42:46 -- common/autotest_common.sh@10 -- # set +x 00:32:07.345 [2024-07-12 17:42:46.127957] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:07.345 [2024-07-12 17:42:46.128013] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.345 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.345 [2024-07-12 17:42:46.244667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.345 [2024-07-12 17:42:46.307819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:07.345 [2024-07-12 17:42:46.308091] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.345 [2024-07-12 17:42:46.308116] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.345 [2024-07-12 17:42:46.308136] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.345 [2024-07-12 17:42:46.308336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:07.345 [2024-07-12 17:42:46.308446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:07.345 [2024-07-12 17:42:46.308566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:07.345 [2024-07-12 17:42:46.308570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:08.278 17:42:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:08.278 17:42:47 -- common/autotest_common.sh@852 -- # return 0 00:32:08.278 17:42:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:08.278 17:42:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:08.278 17:42:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 17:42:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.278 17:42:47 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:32:08.278 17:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.278 17:42:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 Malloc0 00:32:08.278 17:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.278 17:42:47 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:08.278 17:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.278 17:42:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 [2024-07-12 17:42:47.133374] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.278 17:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.278 17:42:47 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:08.278 17:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.278 17:42:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 17:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.278 17:42:47 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:08.278 17:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.278 17:42:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 17:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.278 17:42:47 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.278 17:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.278 17:42:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 [2024-07-12 17:42:47.165986] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.278 17:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.278 17:42:47 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:08.278 
17:42:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.278 17:42:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 17:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.278 17:42:47 -- host/target_disconnect.sh@50 -- # reconnectpid=125020 00:32:08.278 17:42:47 -- host/target_disconnect.sh@52 -- # sleep 2 00:32:08.278 17:42:47 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:08.278 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.259 17:42:49 -- host/target_disconnect.sh@53 -- # kill -9 124731 00:32:10.259 17:42:49 -- host/target_disconnect.sh@55 -- # sleep 2 00:32:10.259 Read completed with error (sct=0, sc=8) 00:32:10.259 starting I/O failed 00:32:10.259 Read completed with error (sct=0, sc=8) 00:32:10.259 starting I/O failed 00:32:10.259 Read completed with error (sct=0, sc=8) 00:32:10.259 starting I/O failed 00:32:10.259 Read completed with error (sct=0, sc=8) 00:32:10.259 starting I/O failed 00:32:10.259 Read completed with error (sct=0, sc=8) 00:32:10.259 starting I/O failed 00:32:10.259 Read completed with error (sct=0, sc=8) 00:32:10.259 starting I/O failed 00:32:10.259 Read completed with error (sct=0, sc=8) 00:32:10.259 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 
00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 [2024-07-12 17:42:49.199973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 
Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write 
completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 [2024-07-12 17:42:49.200265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with 
error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 [2024-07-12 17:42:49.200438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, 
sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 
00:32:10.260 starting I/O failed 00:32:10.260 Read completed with error (sct=0, sc=8) 00:32:10.260 starting I/O failed 00:32:10.260 Write completed with error (sct=0, sc=8) 00:32:10.261 starting I/O failed 00:32:10.261 Write completed with error (sct=0, sc=8) 00:32:10.261 starting I/O failed 00:32:10.261 Write completed with error (sct=0, sc=8) 00:32:10.261 starting I/O failed 00:32:10.261 [2024-07-12 17:42:49.200713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:10.261 [2024-07-12 17:42:49.200959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.201193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.201210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:10.261 qpair failed and we were unable to recover it. 00:32:10.261 [2024-07-12 17:42:49.201322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.201520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.201538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:10.261 qpair failed and we were unable to recover it. 00:32:10.261 [2024-07-12 17:42:49.201782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.201950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.201965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:10.261 qpair failed and we were unable to recover it. 
00:32:10.261 [2024-07-12 17:42:49.202198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.202426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.202457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:10.261 qpair failed and we were unable to recover it. 00:32:10.261 [2024-07-12 17:42:49.202668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.202865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.202895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:10.261 qpair failed and we were unable to recover it. 00:32:10.261 [2024-07-12 17:42:49.203039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.203194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.203225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:10.261 qpair failed and we were unable to recover it. 00:32:10.261 [2024-07-12 17:42:49.203387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.203609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.261 [2024-07-12 17:42:49.203643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.261 qpair failed and we were unable to recover it. 
00:32:10.261 [2024-07-12 17:42:49.203950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.204087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.204117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.204385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.204590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.204621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.204811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.204942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.204972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.205170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.205364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.205396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.205627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.205764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.205795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.206055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.206189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.206220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.206515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.206647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.206678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.206963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.207083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.207114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.207300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.207543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.207572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.207765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.208015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.208046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.208244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.208483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.208516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.208658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.208792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.208823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.208962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.209158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.209174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.209375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.209487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.209502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.209664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.209826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.209841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.209943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.210140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.210415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.210766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.210927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.211170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.211299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.211329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.211516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.211624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.211654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.211937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.212066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.212100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.261 qpair failed and we were unable to recover it.
00:32:10.261 [2024-07-12 17:42:49.212247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.261 [2024-07-12 17:42:49.212542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.212573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.212697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.212995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.213024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.213228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.213355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.213387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.213518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.213642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.213672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.213869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.214008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.214038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.214237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.214446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.214477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.214674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.214860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.214890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.215035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.215276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.215306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.215464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.215647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.215677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.215890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.216166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.216196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.216411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.216599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.216629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.216819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.216995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.217025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.217212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.217402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.217433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.217658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.217778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.217808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.218011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.218174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.218203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.218424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.218563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.218593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.218799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.218975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.219005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.219131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.219324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.219355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.219633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.219756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.219786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.219968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.220137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.220172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.220325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.220572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.220602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.220802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.221021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.221051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.221337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.221514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.221544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.221681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.221909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.221939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.222124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.222314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.222345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.222563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.222772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.222802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.222932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.223122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.223151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.223411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.223593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.223622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.223838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.224021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.224053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.224241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.224481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.224511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.224709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.224910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.224939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.262 qpair failed and we were unable to recover it.
00:32:10.262 [2024-07-12 17:42:49.225165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.262 [2024-07-12 17:42:49.225359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.263 [2024-07-12 17:42:49.225390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.263 qpair failed and we were unable to recover it.
00:32:10.263 [2024-07-12 17:42:49.225574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.263 [2024-07-12 17:42:49.225688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.263 [2024-07-12 17:42:49.225717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.263 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.225930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.226176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.226206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.226468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.226593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.226623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.226830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.226930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.226960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.227194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.227439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.227470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.227658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.227847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.227877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.228000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.228291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.228324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.228518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.228626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.228656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.228846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.229104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.229135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.229336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.229472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.229502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.229696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.229966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.229996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.534 qpair failed and we were unable to recover it.
00:32:10.534 [2024-07-12 17:42:49.230214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.230331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.534 [2024-07-12 17:42:49.230361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.230571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.230700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.230730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.230877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.231003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.231033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.231235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.231424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.231454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.231645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.231978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.232007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.232140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.232388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.232419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.232552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.232799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.232828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.232948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.233141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.233172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.233354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.233692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.233726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.233922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.234057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.234087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.234273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.234419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.234450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.234704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.234814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.234843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.235025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.235162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.235192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.235376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.235549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.235577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.235778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.236049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.236079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.236220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.236442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.236473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.236594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.236781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.236811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.237064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.237177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.237206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.237403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.237611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.237640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.237776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.238019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.238048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.238241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.238372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.238401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.238656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.238790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.238819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.238959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.239202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.535 [2024-07-12 17:42:49.239232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.535 qpair failed and we were unable to recover it.
00:32:10.535 [2024-07-12 17:42:49.239369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.239619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.239649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 00:32:10.535 [2024-07-12 17:42:49.239784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.239969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.239997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 00:32:10.535 [2024-07-12 17:42:49.240136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.240323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.240356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 00:32:10.535 [2024-07-12 17:42:49.240558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.240828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.240857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 
00:32:10.535 [2024-07-12 17:42:49.241108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.241215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.241249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 00:32:10.535 [2024-07-12 17:42:49.241484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.241731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.241760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 00:32:10.535 [2024-07-12 17:42:49.241944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.242211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.242241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 00:32:10.535 [2024-07-12 17:42:49.242555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.242760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.242790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.535 qpair failed and we were unable to recover it. 
00:32:10.535 [2024-07-12 17:42:49.242976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.535 [2024-07-12 17:42:49.243171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.243201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.243418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.243539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.243568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.243770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.244017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.244046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.244229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.244428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.244458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 
00:32:10.536 [2024-07-12 17:42:49.244668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.244851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.244881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.245134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.245383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.245414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.245627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.245876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.245905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.246049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.246323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.246353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 
00:32:10.536 [2024-07-12 17:42:49.246480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.246659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.246689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.246836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.247084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.247113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.247365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.247597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.247627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.247886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.248092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.248121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 
00:32:10.536 [2024-07-12 17:42:49.248342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.248561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.248592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.248788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.249060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.249091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.249350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.249563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.249594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.249741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.249989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.250019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 
00:32:10.536 [2024-07-12 17:42:49.250315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.250516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.250545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.250739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.250916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.250945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.251195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.251440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.251471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.251619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.251889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.251919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 
00:32:10.536 [2024-07-12 17:42:49.252066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.252345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.252375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.252677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.252865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.252895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.253156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.253365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.253396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.536 [2024-07-12 17:42:49.253527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.253706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.253736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 
00:32:10.536 [2024-07-12 17:42:49.253990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.254184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.536 [2024-07-12 17:42:49.254214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.536 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.254497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.254690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.254720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.254861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.255053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.255082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.255196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.255410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.255441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 
00:32:10.537 [2024-07-12 17:42:49.255585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.255838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.255868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.256074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.256300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.256331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.256447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.256639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.256668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.256853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.257099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.257130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 
00:32:10.537 [2024-07-12 17:42:49.257331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.257530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.257560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.257760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.257945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.257974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.258320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.258516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.258546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.258676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.258950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.258980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 
00:32:10.537 [2024-07-12 17:42:49.259121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.259373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.259404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.259677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.259854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.259890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.260118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.260270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.260291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.260471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.260572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.260587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 
00:32:10.537 [2024-07-12 17:42:49.260683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.260964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.260994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.261198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.261323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.261340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.261491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.261651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.261667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.261823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.262000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.262030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 
00:32:10.537 [2024-07-12 17:42:49.262242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.262450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.262481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.262669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.262845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.262876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.263155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.263278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.263309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.263454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.263706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.263745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 
00:32:10.537 [2024-07-12 17:42:49.263888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.263966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.263981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.537 qpair failed and we were unable to recover it. 00:32:10.537 [2024-07-12 17:42:49.264205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.264468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.537 [2024-07-12 17:42:49.264499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.264694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.264823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.264853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.265052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.265282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.265313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.265513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.265646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.265676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.265862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.265993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.266024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.266141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.266332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.266363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.266642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.266850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.266880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.267064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.267198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.267228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.267490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.267683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.267726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.267864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.268042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.268073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.268217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.268406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.268422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.268520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.268735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.268777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.268890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.269025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.269055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.269166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.269419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.269452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.269604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.269740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.269770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.269949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.270069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.270099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.270362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.270583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.270614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.270896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.271077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.271107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.271390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.271607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.271638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.271846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.272039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.272070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.272225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.272312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.272328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.272487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.272678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.272708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.272919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.273064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.273080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.273241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.273447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.273477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.273608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.273715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.273745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.274045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.274242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.274290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.274452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.274674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.274704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.274827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.274959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.274989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.275203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.275396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.275427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.275642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.275849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.275880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 00:32:10.538 [2024-07-12 17:42:49.276012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.276212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.538 [2024-07-12 17:42:49.276227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.538 qpair failed and we were unable to recover it. 
00:32:10.538 [2024-07-12 17:42:49.276388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.276611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.276626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.276816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.276989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.277005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.277116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.277325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.277357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.277542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.277664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.277694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 
00:32:10.539 [2024-07-12 17:42:49.277953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.278132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.278147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.278248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.278437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.278453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.278543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.278780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.278811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.279012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.279209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.279239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 
00:32:10.539 [2024-07-12 17:42:49.279505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.279754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.279784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.279989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.280272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.280304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.280557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.280730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.280759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.280978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.281176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.281207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 
00:32:10.539 [2024-07-12 17:42:49.281430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.281532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.281562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.281768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.281893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.281924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.282156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.282404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.282436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.282644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.282790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.282820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 
00:32:10.539 [2024-07-12 17:42:49.283126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.283263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.283295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.283433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.283668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.283699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.283900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.284074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.284105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.284321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.284399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.284414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 
00:32:10.539 [2024-07-12 17:42:49.284583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.284816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.284846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.285040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.285331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.285362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.285545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.285765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.285795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.539 qpair failed and we were unable to recover it. 00:32:10.539 [2024-07-12 17:42:49.285932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.539 [2024-07-12 17:42:49.286179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.286210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 
00:32:10.540 [2024-07-12 17:42:49.286414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.286657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.286687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.286829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.286940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.286970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.287147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.287300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.287316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.287536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.287630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.287645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 
00:32:10.540 [2024-07-12 17:42:49.287743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.287841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.287856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.287955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.288103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.288117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.288279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.288530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.288561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.288753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.288951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.288981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 
00:32:10.540 [2024-07-12 17:42:49.289183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.289360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.289403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.289504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.289658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.289673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.289902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.289999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.290014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.290215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.290417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.290448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 
00:32:10.540 [2024-07-12 17:42:49.290699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.290822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.290852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.291038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.291198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.291227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.291365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.291539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.291570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.291820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.291949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.291980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 
00:32:10.540 [2024-07-12 17:42:49.292202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.292334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.292366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.292618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.292826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.292856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.293106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.293319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.293350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.293486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.293669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.293700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 
00:32:10.540 [2024-07-12 17:42:49.293992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.294183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.294213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.294358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.294552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.294568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.294738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.294919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.294950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 00:32:10.540 [2024-07-12 17:42:49.295081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.295241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.540 [2024-07-12 17:42:49.295260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.540 qpair failed and we were unable to recover it. 
00:32:10.540 [2024-07-12 17:42:49.295507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.295676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.295706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.540 qpair failed and we were unable to recover it.
00:32:10.540 [2024-07-12 17:42:49.295960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.540 qpair failed and we were unable to recover it.
00:32:10.540 [2024-07-12 17:42:49.296196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.540 qpair failed and we were unable to recover it.
00:32:10.540 [2024-07-12 17:42:49.296486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.540 qpair failed and we were unable to recover it.
00:32:10.540 [2024-07-12 17:42:49.296826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.540 [2024-07-12 17:42:49.296963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.297147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.297368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.297399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.297615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.297830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.297860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.298004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.298122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.298153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.298378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.298557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.298588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.298771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.299015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.299046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.299252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.299503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.299533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.299761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.300013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.300051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.300144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.300308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.300345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.300650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.300756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.300787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.300986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.301179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.301209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.301507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.301615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.301645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.301907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.302112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.302128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.302242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.302522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.302553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.302806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.302928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.302958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.303236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.303408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.303439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.303562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.303836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.303866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.304066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.304172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.304202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.304365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.304488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.304503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.304660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.304809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.304824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.304977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.305250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.305292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.305434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.305551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.305582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.305777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.305970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.306008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.306161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.306404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.306420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.306539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.306805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.306836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.307032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.307210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.307240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.307434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.307611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.307656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.307850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.308136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.308167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.308484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.308595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.308625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.308753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.308953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.308984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.309176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.309315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.541 [2024-07-12 17:42:49.309346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.541 qpair failed and we were unable to recover it.
00:32:10.541 [2024-07-12 17:42:49.309528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.309648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.309663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.309752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.309909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.309924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.310085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.310244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.310299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.310523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.310643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.310674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.310870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.311043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.311058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.311223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.311433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.311464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.311689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.311803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.311833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.311974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.312166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.312196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.312408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.312586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.312616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.312743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.312871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.312902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.313118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.313364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.313395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.313577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.313755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.313786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.314012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.314205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.314220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.314446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.314725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.314755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.314978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.315189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.315219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.315425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.315629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.315660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.315788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.316014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.316044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.316230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.316476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.316507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.316699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.316892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.316923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.317197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.317306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.317338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.317542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.317716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.317746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.317939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.318154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.318183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.318323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.318449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.318464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.318612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.318820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.318850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.318969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.319178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.319208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.319343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.319530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.319565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.319751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.319965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.319995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.320180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.320320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.320351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.542 qpair failed and we were unable to recover it.
00:32:10.542 [2024-07-12 17:42:49.320574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.320819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.542 [2024-07-12 17:42:49.320834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.321020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.321241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.321281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.321424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.321611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.321642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.321781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.321987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.322017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.322218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.322406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.322437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.322611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.322705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.322720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.322887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.323013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.323044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.323227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.323482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.323519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.323714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.323887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.543 [2024-07-12 17:42:49.323917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.543 qpair failed and we were unable to recover it.
00:32:10.543 [2024-07-12 17:42:49.324115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.324303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.324334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.324584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.324737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.324767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.324964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.325162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.325193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.325373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.325524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.325539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 
00:32:10.543 [2024-07-12 17:42:49.325799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.325986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.326016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.326192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.326338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.326354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.326512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.326619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.326634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.326858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.326986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.327001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 
00:32:10.543 [2024-07-12 17:42:49.327109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.327266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.327307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.327505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.327704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.327734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.327961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.328164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.328194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.328386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.328480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.328496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 
00:32:10.543 [2024-07-12 17:42:49.328667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.328917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.328947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.329141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.329418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.329449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.329671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.329949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.329965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 00:32:10.543 [2024-07-12 17:42:49.330058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.330226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.330277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.543 qpair failed and we were unable to recover it. 
00:32:10.543 [2024-07-12 17:42:49.330555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.543 [2024-07-12 17:42:49.330810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.330840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.330966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.331141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.331157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.331377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.331552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.331588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.331871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.332072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.332104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 
00:32:10.544 [2024-07-12 17:42:49.332288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.332536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.332567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.332763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.333009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.333038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.333233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.333516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.333547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.333680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.333980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.334010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 
00:32:10.544 [2024-07-12 17:42:49.334212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.334403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.334434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.334641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.334860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.334875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.335033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.335148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.335163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.335263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.335353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.335368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 
00:32:10.544 [2024-07-12 17:42:49.335525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.335618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.335633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.335874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.335992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.336023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.336234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.336459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.336490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.336627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.336765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.336795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 
00:32:10.544 [2024-07-12 17:42:49.336932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.337145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.337176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.337389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.337643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.337674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.337881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.338001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.338031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.338273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.338406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.338437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 
00:32:10.544 [2024-07-12 17:42:49.338716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.338942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.338972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.339121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.339275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.339291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.339492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.339591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.339621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.339775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.339972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.340003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 
00:32:10.544 [2024-07-12 17:42:49.340198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.340318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.340349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.340489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.340755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.340787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.342164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.342290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.342309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.342463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.342641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.342671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 
00:32:10.544 [2024-07-12 17:42:49.342876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.343079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.343109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.343305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.343485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.343516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.544 qpair failed and we were unable to recover it. 00:32:10.544 [2024-07-12 17:42:49.343729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.544 [2024-07-12 17:42:49.343983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.344014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.344287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.344437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.344468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.344730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.344838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.344854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.345085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.345192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.345222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.345424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.345612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.345642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.345844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.346056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.346087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.346236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.346342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.346358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.346460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.346680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.346711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.346987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.347179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.347210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.347368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.347497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.347527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.347751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.347910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.347925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.348019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.348197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.348228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.348440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.348630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.348661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.348947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.349067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.349097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.349241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.349442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.349473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.349667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.349937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.349967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.350150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.350269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.350300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.350525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.350745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.350776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.351030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.351240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.351261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.351516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.351681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.351711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.351852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.352034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.352064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.352273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.352442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.352472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.352602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.352849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.352879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.353038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.353227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.353270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.353442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.353594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.353626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.353820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.353951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.353982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.354246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.354373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.354389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.354631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.354736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.354766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.354878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.354991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.355023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.355182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.355461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.355492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 
00:32:10.545 [2024-07-12 17:42:49.355685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.355867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.545 [2024-07-12 17:42:49.355898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.545 qpair failed and we were unable to recover it. 00:32:10.545 [2024-07-12 17:42:49.356095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.356277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.356308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.356492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.356624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.356638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.356921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.357126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.357156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 
00:32:10.546 [2024-07-12 17:42:49.357459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.357619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.357634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.357790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.357933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.357964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.358158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.358349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.358365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.358611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.358722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.358738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 
00:32:10.546 [2024-07-12 17:42:49.358894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.358995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.359010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.359192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.359467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.359498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.359636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.359836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.359866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.360054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.360240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.360297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 
00:32:10.546 [2024-07-12 17:42:49.360483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.360664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.360694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.360888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.361162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.361191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.361442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.361568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.361599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.361791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.361911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.361941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 
00:32:10.546 [2024-07-12 17:42:49.362194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.362380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.362412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.362725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.362974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.363005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.363184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.363385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.363400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.363620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.363713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.363729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 
00:32:10.546 [2024-07-12 17:42:49.363839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.363922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.363938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.364084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.364272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.364303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.364431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.364558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.364588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.364785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.364964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.364994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 
00:32:10.546 [2024-07-12 17:42:49.365109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.365297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.365328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.365508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.365707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.365738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.365935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.366207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.366237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.366383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.366563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.366578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 
00:32:10.546 [2024-07-12 17:42:49.366774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.366933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.366948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.367105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.367276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.367308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.367562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.367703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.367734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.546 qpair failed and we were unable to recover it. 00:32:10.546 [2024-07-12 17:42:49.367916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.368043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.546 [2024-07-12 17:42:49.368074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.368289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.368474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.368490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.368650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.368849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.368880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.368999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.369112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.369142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.369348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.369517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.369532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.369706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.369946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.369962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.370142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.370249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.370270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.370440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.370683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.370714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.370876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.371056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.371086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.371203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.371469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.371501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.371697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.371822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.371852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.372043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.372224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.372266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.372460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.372733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.372763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.372953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.373230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.373272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.373463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.373697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.373729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.373867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.374086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.374115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.374387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.374606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.374636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.374767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.374892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.374922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.375105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.375291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.375324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.375435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.375708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.375738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.375932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.376169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.376198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.376316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.376511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.376541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.376655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.376797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.376827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.376952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.377089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.377118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.377225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.377389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.377426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.377564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.377828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.377859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.378062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.379055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.379085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.379277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.379536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.379567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.379849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.380032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.380063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 
00:32:10.547 [2024-07-12 17:42:49.380274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.380408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.380439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.380598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.380680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.547 [2024-07-12 17:42:49.380696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.547 qpair failed and we were unable to recover it. 00:32:10.547 [2024-07-12 17:42:49.380857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.548 [2024-07-12 17:42:49.381071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.548 [2024-07-12 17:42:49.381101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.548 qpair failed and we were unable to recover it. 00:32:10.548 [2024-07-12 17:42:49.381317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.548 [2024-07-12 17:42:49.381510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.548 [2024-07-12 17:42:49.381526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.548 qpair failed and we were unable to recover it. 
00:32:10.550 [2024-07-12 17:42:49.412298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.550 [2024-07-12 17:42:49.412476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.412505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.412725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.412857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.412887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.413080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.413188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.413217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.413340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.413613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.413644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 
00:32:10.551 [2024-07-12 17:42:49.413768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.413890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.413920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.414132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.414433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.414467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.414675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.414788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.414817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.414946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.415054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.415083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 
00:32:10.551 [2024-07-12 17:42:49.415285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.415404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.415434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.415615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.415771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.415801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.415997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.416193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.416224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.416447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.416597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.416612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 
00:32:10.551 [2024-07-12 17:42:49.416762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.416858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.416873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.417100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.417347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.417379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.417552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.417710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.417739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.417924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.418241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.418279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 
00:32:10.551 [2024-07-12 17:42:49.418425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.419811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.419838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.420021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.420278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.420294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.420378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.420622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.420652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.420794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.420994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.421024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 
00:32:10.551 [2024-07-12 17:42:49.421224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.421343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.421373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.421625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.421864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.421895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.422101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.422279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.422311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.422516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.422816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.422846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 
00:32:10.551 [2024-07-12 17:42:49.422974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.423086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.423116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.423271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.423452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.423467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.423551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.423756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.423772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.423889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.424066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.424096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 
00:32:10.551 [2024-07-12 17:42:49.424378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.424495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.424525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.424713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.424855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.424883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.551 [2024-07-12 17:42:49.425024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.425169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.551 [2024-07-12 17:42:49.425198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.551 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.425399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.425653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.425684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 
00:32:10.552 [2024-07-12 17:42:49.425901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.426066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.426096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.426355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.426550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.426579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.426760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.426877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.426907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.427129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.427313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.427343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 
00:32:10.552 [2024-07-12 17:42:49.427535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.427665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.427706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.427898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.428846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.428873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.428986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.429156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.429187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.429470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.430535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.430564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 
00:32:10.552 [2024-07-12 17:42:49.430761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.430917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.430932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.431087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.431235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.431278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.431461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.431630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.431645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.431766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.431864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.431878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 
00:32:10.552 [2024-07-12 17:42:49.431979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.432163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.432179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.432277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.432359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.432374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.432482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.432674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.432706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.432903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.433085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.433114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 
00:32:10.552 [2024-07-12 17:42:49.433225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.433511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.433542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.433759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.433973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.434003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.434226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.434450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.434480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.434616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.434808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.434839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 
00:32:10.552 [2024-07-12 17:42:49.434965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.435077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.435106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.435301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.435507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.435538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.435666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.435938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.435969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 00:32:10.552 [2024-07-12 17:42:49.436163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.436323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.436357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.552 qpair failed and we were unable to recover it. 
00:32:10.552 [2024-07-12 17:42:49.436493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.552 [2024-07-12 17:42:49.436638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.553 [2024-07-12 17:42:49.436667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.553 qpair failed and we were unable to recover it. 00:32:10.553 [2024-07-12 17:42:49.436786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.553 [2024-07-12 17:42:49.436898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.553 [2024-07-12 17:42:49.436927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.553 qpair failed and we were unable to recover it. 00:32:10.553 [2024-07-12 17:42:49.437050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.553 [2024-07-12 17:42:49.437229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.553 [2024-07-12 17:42:49.437265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.553 qpair failed and we were unable to recover it. 00:32:10.553 [2024-07-12 17:42:49.437470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.553 [2024-07-12 17:42:49.437661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.553 [2024-07-12 17:42:49.437691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.553 qpair failed and we were unable to recover it. 
00:32:10.555 [2024-07-12 17:42:49.466781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.466950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.466979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.555 qpair failed and we were unable to recover it. 00:32:10.555 [2024-07-12 17:42:49.468313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.468447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.468465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.555 qpair failed and we were unable to recover it. 00:32:10.555 [2024-07-12 17:42:49.468742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.468898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.468930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.555 qpair failed and we were unable to recover it. 00:32:10.555 [2024-07-12 17:42:49.469067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.469366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.555 [2024-07-12 17:42:49.469398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 
00:32:10.556 [2024-07-12 17:42:49.469665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.469793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.469822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.470006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.470125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.470153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.470356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.470558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.470588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.470778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.470976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.471004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 
00:32:10.556 [2024-07-12 17:42:49.472281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.472467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.472484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.472576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.472727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.472742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.474060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.474189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.474204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.474295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.474451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.474466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 
00:32:10.556 [2024-07-12 17:42:49.474695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.474869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.474899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.475116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.475240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.475278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.475532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.475647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.475678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.475804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.475911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.475947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 
00:32:10.556 [2024-07-12 17:42:49.476043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.476316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.476578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.476795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.476908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 
00:32:10.556 [2024-07-12 17:42:49.477001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.477908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.477933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.478188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.478282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.478298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.478553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.478666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.478695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.478827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.478968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.478998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 
00:32:10.556 [2024-07-12 17:42:49.479127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.479248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.479291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.479436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.479632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.479662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.479859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.479980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.480010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.480146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.480252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.480288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 
00:32:10.556 [2024-07-12 17:42:49.481310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.481533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.556 [2024-07-12 17:42:49.481551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.556 qpair failed and we were unable to recover it. 00:32:10.556 [2024-07-12 17:42:49.481657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.481899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.481914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.482017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.482345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 
00:32:10.557 [2024-07-12 17:42:49.482548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.482789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.482901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.483004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.483102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.483133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.483290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.483406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.483445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 
00:32:10.557 [2024-07-12 17:42:49.483664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.483847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.483863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.483965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.484298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.484563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 
00:32:10.557 [2024-07-12 17:42:49.484752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.484857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.485015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.485122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.485138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.485292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.485453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.485498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.485649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.485825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.485854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 
00:32:10.557 [2024-07-12 17:42:49.486051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.486205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.486221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.486471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.486715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.486731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.486904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.487096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.487111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.487286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.487441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.487458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 
00:32:10.557 [2024-07-12 17:42:49.487566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.487732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.487748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.487847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.488193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.488399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 
00:32:10.557 [2024-07-12 17:42:49.488672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.488920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.557 [2024-07-12 17:42:49.489139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.489390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.557 [2024-07-12 17:42:49.489407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.557 qpair failed and we were unable to recover it. 00:32:10.836 [2024-07-12 17:42:49.489649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.836 [2024-07-12 17:42:49.489891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.836 [2024-07-12 17:42:49.489908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.836 qpair failed and we were unable to recover it. 00:32:10.836 [2024-07-12 17:42:49.489988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.836 [2024-07-12 17:42:49.490060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.836 [2024-07-12 17:42:49.490076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.836 qpair failed and we were unable to recover it. 
00:32:10.836 [2024-07-12 17:42:49.490229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.836 [2024-07-12 17:42:49.490310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.490325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.490419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.490582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.490598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.490752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.490918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.490934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.491010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.491107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.491122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 
00:32:10.837 [2024-07-12 17:42:49.491203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.491364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.491380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.491476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.491583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.491599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.491838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.492055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.492071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.492152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.492318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.492333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 
00:32:10.837 [2024-07-12 17:42:49.492603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.492819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.492834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.492929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.493121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.493443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 
00:32:10.837 [2024-07-12 17:42:49.493662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.493843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.493949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.494101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.494214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.494229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.494334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.494518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.494533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 
00:32:10.837 [2024-07-12 17:42:49.494685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.494830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.494845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.494996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.495319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.495527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 
00:32:10.837 [2024-07-12 17:42:49.495791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.495899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.496144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.496295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.496310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.496476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.496634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.496648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.496802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.496880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.496895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 
00:32:10.837 [2024-07-12 17:42:49.497050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.497370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.497585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.497794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.497903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 
00:32:10.837 [2024-07-12 17:42:49.498082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.498305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.498321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.498416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.498569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.498585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.837 qpair failed and we were unable to recover it. 00:32:10.837 [2024-07-12 17:42:49.498668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.837 [2024-07-12 17:42:49.498813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.498829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.498944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.499116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.499495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.499781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.499988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.500150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.500227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.500242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.500400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.500549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.500569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.500746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.500919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.500937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.501104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.501194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.501209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.501478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.501728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.501744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.501909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.501991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.502006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.502088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.502167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.502183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.502277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.502454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.502470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.502629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.502812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.502828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.502929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.503145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.503160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.503324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.503408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.503423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.503643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.503728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.503744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.503908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.504069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.504088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.504276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.504436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.504450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.504602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.504759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.504775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.504940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.505329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.505596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.505878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.505977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.506235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.506402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.506419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.506515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.506676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.506691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.506797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.506879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.506893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.507051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.507248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.507616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 
00:32:10.838 [2024-07-12 17:42:49.507810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.838 [2024-07-12 17:42:49.507923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.838 qpair failed and we were unable to recover it. 00:32:10.838 [2024-07-12 17:42:49.508016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.508291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.508529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 
00:32:10.839 [2024-07-12 17:42:49.508743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.508920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.508999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.509236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.509251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.509451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.509603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.509619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.509719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.509870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.509887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 
00:32:10.839 [2024-07-12 17:42:49.509977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.510069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.510085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.510173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.510419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.510435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.510512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.510755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.510769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.510854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 
00:32:10.839 [2024-07-12 17:42:49.511167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.511471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.511852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.511960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 00:32:10.839 [2024-07-12 17:42:49.512111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.512265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.839 [2024-07-12 17:42:49.512281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.839 qpair failed and we were unable to recover it. 
00:32:10.841 [2024-07-12 17:42:49.530060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.530194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.530224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:10.841 qpair failed and we were unable to recover it.
00:32:10.841 [2024-07-12 17:42:49.530467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.530658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.530693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.841 qpair failed and we were unable to recover it.
00:32:10.841 [2024-07-12 17:42:49.530841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.531040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.531069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:10.841 qpair failed and we were unable to recover it.
00:32:10.841 [2024-07-12 17:42:49.531324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.531493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.841 [2024-07-12 17:42:49.531505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:10.841 qpair failed and we were unable to recover it.
00:32:10.842 [2024-07-12 17:42:49.539561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.539771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.539786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.539876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.540061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.540077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.540228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.540502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.540517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.540597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.540673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.540687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 
00:32:10.842 [2024-07-12 17:42:49.540905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.540996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.541093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.541292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.541589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 
00:32:10.842 [2024-07-12 17:42:49.541854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.541971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.542086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.542177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.542192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.542277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.542441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.542457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.542545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.542691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.542707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 
00:32:10.842 [2024-07-12 17:42:49.542859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.543075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.543090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.543245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.543417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.543433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.543530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.543676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.543691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.543878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.544025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.544040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 
00:32:10.842 [2024-07-12 17:42:49.544146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.544246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.842 [2024-07-12 17:42:49.544267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.842 qpair failed and we were unable to recover it. 00:32:10.842 [2024-07-12 17:42:49.544366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.544518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.544534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.544622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.544785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.544801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.544960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 
00:32:10.843 [2024-07-12 17:42:49.545208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.545403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.545598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.545796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.545894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 
00:32:10.843 [2024-07-12 17:42:49.546073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.546284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.546464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.546733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.546827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 
00:32:10.843 [2024-07-12 17:42:49.546906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.547167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.547434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.547795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.547970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 
00:32:10.843 [2024-07-12 17:42:49.548054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.548199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.548214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.548315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.548540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.548555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.548662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.548747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.548760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.548902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.549003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.549017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 
00:32:10.843 [2024-07-12 17:42:49.549265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.549433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.549449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.549536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.549786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.549801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.549969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.550313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 
00:32:10.843 [2024-07-12 17:42:49.550645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.550850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.550964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.551048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.551195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.551211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.551304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.551405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.551420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 
00:32:10.843 [2024-07-12 17:42:49.551566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.551718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.551734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.551979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.552094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.552109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.552194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.552293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.552308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.843 qpair failed and we were unable to recover it. 00:32:10.843 [2024-07-12 17:42:49.552388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.552478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.843 [2024-07-12 17:42:49.552492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 
00:32:10.844 [2024-07-12 17:42:49.552660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.552756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.552774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.552921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.553148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.553163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.553328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.553494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.553510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.553602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.553748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.553763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 
00:32:10.844 [2024-07-12 17:42:49.553979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.554069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.554084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.554261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.554432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.554448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.554536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.554698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.554713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.554931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 
00:32:10.844 [2024-07-12 17:42:49.555133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.555453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.555668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.555842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 00:32:10.844 [2024-07-12 17:42:49.556059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.556205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.844 [2024-07-12 17:42:49.556220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.844 qpair failed and we were unable to recover it. 
00:32:10.847 [2024-07-12 17:42:49.581564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.581708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.581723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.581943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.582213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.582227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.582506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.582673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.582688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.582785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.583032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.583046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 
00:32:10.847 [2024-07-12 17:42:49.583216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.583372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.583388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.583547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.583715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.583730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.584050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.584328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 
00:32:10.847 [2024-07-12 17:42:49.584677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.584891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.584994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.585168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.585265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.585281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.585430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.585517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.585532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 
00:32:10.847 [2024-07-12 17:42:49.585683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.585844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.585859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.586081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.586233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.586248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.586357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.586524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.586540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.586629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.586805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.586821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 
00:32:10.847 [2024-07-12 17:42:49.586923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.587140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.587154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.587273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.587442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.587458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.587542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.587781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.587798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.587965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.588142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.588157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 
00:32:10.847 [2024-07-12 17:42:49.588238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.588320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.588335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.588509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.588716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.588731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.847 qpair failed and we were unable to recover it. 00:32:10.847 [2024-07-12 17:42:49.588899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.847 [2024-07-12 17:42:49.588975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.588990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.589154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.589370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.589387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.589537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.589763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.589779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.589926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.590165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.590180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.590337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.590417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.590433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.590652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.590882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.590897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.591116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.591211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.591226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.591494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.591665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.591680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.591839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.591995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.592009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.592101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.592370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.592385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.592564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.592808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.592824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.592974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.593213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.593229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.593482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.593647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.593663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.593854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.594024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.594040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.594189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.594339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.594355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.594575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.594814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.594830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.595068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.595268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.595283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.595447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.595607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.595622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.595780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.595954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.595969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.596048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.596262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.596278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.596513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.596661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.596676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.596858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.597178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.597495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.597832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.597950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.598046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.598236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.598251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.598494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.598603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.598618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.598715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.598802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.598818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.598985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.599078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.599093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.599277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.599458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.599473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 
00:32:10.848 [2024-07-12 17:42:49.599646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.599721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.848 [2024-07-12 17:42:49.599736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.848 qpair failed and we were unable to recover it. 00:32:10.848 [2024-07-12 17:42:49.599832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.600077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.600092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.849 qpair failed and we were unable to recover it. 00:32:10.849 [2024-07-12 17:42:49.600311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.600519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.600533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.849 qpair failed and we were unable to recover it. 00:32:10.849 [2024-07-12 17:42:49.600634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.600732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.600750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.849 qpair failed and we were unable to recover it. 
00:32:10.849 [2024-07-12 17:42:49.600946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.601115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.601130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.849 qpair failed and we were unable to recover it. 00:32:10.849 [2024-07-12 17:42:49.601327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.601499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.601514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.849 qpair failed and we were unable to recover it. 00:32:10.849 [2024-07-12 17:42:49.601666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.601765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.601780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.849 qpair failed and we were unable to recover it. 00:32:10.849 [2024-07-12 17:42:49.601951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.602102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.849 [2024-07-12 17:42:49.602118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.849 qpair failed and we were unable to recover it. 
00:32:10.852 [2024-07-12 17:42:49.628386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.628541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.628554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.628668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.628744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.628757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.628917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.629246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 
00:32:10.852 [2024-07-12 17:42:49.629454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.629669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.629821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.629959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.630200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 
00:32:10.852 [2024-07-12 17:42:49.630439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.630646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.630833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.630994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.631194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 
00:32:10.852 [2024-07-12 17:42:49.631482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.631813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.631988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.632127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.632214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.632223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.632379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.632527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.632536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 
00:32:10.852 [2024-07-12 17:42:49.632619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.632755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.632764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.632916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.633093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.633405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 
00:32:10.852 [2024-07-12 17:42:49.633692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.633787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.633967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.634117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.634128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.634332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.634480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.634491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.634650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.634785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.634795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 
00:32:10.852 [2024-07-12 17:42:49.634881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.635033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.635043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.635116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.635320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.635331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.635426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.635539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.635549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.852 qpair failed and we were unable to recover it. 00:32:10.852 [2024-07-12 17:42:49.635708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.852 [2024-07-12 17:42:49.635797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.635807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.635957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.636196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.636207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.636368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.636465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.636475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.636657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.636807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.636818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.636973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.637161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.637407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.637653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.637876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.637974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.638161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.638351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.638363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.638504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.638591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.638602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.638743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.638883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.638894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.639055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.639304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.639566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.639728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.639991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.640070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.640225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.640235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.640395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.640602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.640613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.640696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.640767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.640777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.640989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.641204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.641394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.641573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.641719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.641797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.642115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.642420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.642662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.642848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.642926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 00:32:10.853 [2024-07-12 17:42:49.643014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.643075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.853 [2024-07-12 17:42:49.643084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.853 qpair failed and we were unable to recover it. 
00:32:10.853 [2024-07-12 17:42:49.643160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.853 [2024-07-12 17:42:49.643312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.853 [2024-07-12 17:42:49.643339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:10.853 qpair failed and we were unable to recover it.
[... the same three-record failure group repeats continuously from 17:42:49.643418 through 17:42:49.668159: two posix_sock_create connect() failures with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7f0944000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:32:10.857 [2024-07-12 17:42:49.668301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.857 [2024-07-12 17:42:49.668446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:10.857 [2024-07-12 17:42:49.668459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:10.857 qpair failed and we were unable to recover it.
00:32:10.857 [2024-07-12 17:42:49.668613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.668764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.668775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.668863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.669013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.669024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.669251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.669435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.669445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.669518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.669667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.669678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 
00:32:10.857 [2024-07-12 17:42:49.669830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.669992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.670002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.670237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.670401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.670412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.670507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.670596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.670606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.670670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.670841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.670853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 
00:32:10.857 [2024-07-12 17:42:49.670939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.671071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.671081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.671230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.671401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.671412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.671510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.671761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.671771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.671844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 
00:32:10.857 [2024-07-12 17:42:49.672108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.672386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.672642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.672754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.672918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 
00:32:10.857 [2024-07-12 17:42:49.673235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.673406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.673706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.673811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.673956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.674118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.674128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 
00:32:10.857 [2024-07-12 17:42:49.674335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.674554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.674565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.674809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.674966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.674977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.675212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.675295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.675306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.675395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.675629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.675640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 
00:32:10.857 [2024-07-12 17:42:49.675795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.675894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.675905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.676047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.676144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.676155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.676298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.676370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.676382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.857 qpair failed and we were unable to recover it. 00:32:10.857 [2024-07-12 17:42:49.676483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.676583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.857 [2024-07-12 17:42:49.676595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 
00:32:10.858 [2024-07-12 17:42:49.676695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.676798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.676809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.676943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.677193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.677558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 
00:32:10.858 [2024-07-12 17:42:49.677828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.677916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.678003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.678339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.678542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 
00:32:10.858 [2024-07-12 17:42:49.678778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.678928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.679086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.679189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.679199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.679272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.679484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.679495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.679582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.679691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.679702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 
00:32:10.858 [2024-07-12 17:42:49.679775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.680115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.680388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.680653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.680752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 
00:32:10.858 [2024-07-12 17:42:49.680917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.681059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.681070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.681142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.681375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.681387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.681596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.681747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.681759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.681999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.682143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.682154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 
00:32:10.858 [2024-07-12 17:42:49.682385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.682632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.682644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.682731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.682874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.682886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.683057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.683148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.683159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.683318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.683460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.683471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 
00:32:10.858 [2024-07-12 17:42:49.683611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.683696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.858 [2024-07-12 17:42:49.683708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.858 qpair failed and we were unable to recover it. 00:32:10.858 [2024-07-12 17:42:49.683783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.683919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.683930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.859 qpair failed and we were unable to recover it. 00:32:10.859 [2024-07-12 17:42:49.684081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.684222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.684233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.859 qpair failed and we were unable to recover it. 00:32:10.859 [2024-07-12 17:42:49.684440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.684529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.684539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.859 qpair failed and we were unable to recover it. 
00:32:10.859 [2024-07-12 17:42:49.684638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.684732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.684743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.859 qpair failed and we were unable to recover it. 00:32:10.859 [2024-07-12 17:42:49.685006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.685156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.685167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.859 qpair failed and we were unable to recover it. 00:32:10.859 [2024-07-12 17:42:49.685322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.685393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.685404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.859 qpair failed and we were unable to recover it. 00:32:10.859 [2024-07-12 17:42:49.685544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.685681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.859 [2024-07-12 17:42:49.685691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.859 qpair failed and we were unable to recover it. 
00:32:10.862 [2024-07-12 17:42:49.709552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.709695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.709708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.709978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.710186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.710197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.710292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.710473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.710484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.710580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.710661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.710672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 
00:32:10.862 [2024-07-12 17:42:49.710939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.711106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.711117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.711354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.711463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.711474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.711628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.711825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.711836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.711977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 
00:32:10.862 [2024-07-12 17:42:49.712146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.712486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.712801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.712921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.712996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 
00:32:10.862 [2024-07-12 17:42:49.713247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.713464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.713748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.713927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.714077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.714240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.714251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 
00:32:10.862 [2024-07-12 17:42:49.714403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.714489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.714500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.714660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.714872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.714883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.715060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.715307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 
00:32:10.862 [2024-07-12 17:42:49.715541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.715806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.715884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.715957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.716036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.716047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 00:32:10.862 [2024-07-12 17:42:49.716142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.716379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.716391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.862 qpair failed and we were unable to recover it. 
00:32:10.862 [2024-07-12 17:42:49.716544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.716689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.862 [2024-07-12 17:42:49.716700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.716845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.717096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.717359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 
00:32:10.863 [2024-07-12 17:42:49.717667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.717816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.717998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.718347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.718547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 
00:32:10.863 [2024-07-12 17:42:49.718841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.718940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.719079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.719294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.719600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 
00:32:10.863 [2024-07-12 17:42:49.719782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.719883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.719950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.720105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.720115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.720308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.720383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.720393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.720542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.720751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.720763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 
00:32:10.863 [2024-07-12 17:42:49.720974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.721063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.721074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.721275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.721369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.721380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.721588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.721673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.721683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.721911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.722053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.722065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 
00:32:10.863 [2024-07-12 17:42:49.722230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.722410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.722421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.722568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.722737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.722747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.722935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.723174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 
00:32:10.863 [2024-07-12 17:42:49.723493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.723718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.723953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.724022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.724171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.724180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.724425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.724530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.724540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 
00:32:10.863 [2024-07-12 17:42:49.724693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.724899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.863 [2024-07-12 17:42:49.724910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.863 qpair failed and we were unable to recover it. 00:32:10.863 [2024-07-12 17:42:49.724995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.864 qpair failed and we were unable to recover it. 00:32:10.864 [2024-07-12 17:42:49.725295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.864 qpair failed and we were unable to recover it. 00:32:10.864 [2024-07-12 17:42:49.725604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.864 qpair failed and we were unable to recover it. 
00:32:10.864 [2024-07-12 17:42:49.725770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.725958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.864 qpair failed and we were unable to recover it. 00:32:10.864 [2024-07-12 17:42:49.726061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.726227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.726238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.864 qpair failed and we were unable to recover it. 00:32:10.864 [2024-07-12 17:42:49.726348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.726455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.726465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.864 qpair failed and we were unable to recover it. 00:32:10.864 [2024-07-12 17:42:49.726632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.726703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.864 [2024-07-12 17:42:49.726712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.864 qpair failed and we were unable to recover it. 
00:32:10.864 to 00:32:10.866 [2024-07-12 17:42:49.726793 to 17:42:49.749453] (the same three-entry failure pattern repeats: posix.c:1032:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420; each attempt ending with "qpair failed and we were unable to recover it.")
00:32:10.866 [2024-07-12 17:42:49.749610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.866 [2024-07-12 17:42:49.749764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.866 [2024-07-12 17:42:49.749774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.866 qpair failed and we were unable to recover it. 00:32:10.866 [2024-07-12 17:42:49.749930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.750195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.750428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 
00:32:10.867 [2024-07-12 17:42:49.750668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.750774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.750864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.751132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.751426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 
00:32:10.867 [2024-07-12 17:42:49.751743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.751833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.751991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.752074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.752084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.752237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.752396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.752406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.752553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.752737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.752748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 
00:32:10.867 [2024-07-12 17:42:49.752890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.753138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.753373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.753549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 
00:32:10.867 [2024-07-12 17:42:49.753812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.753960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.754060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.754229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.754383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 
00:32:10.867 [2024-07-12 17:42:49.754539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.754856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.754934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.755150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.755316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.755327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.867 [2024-07-12 17:42:49.755405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.755468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.755478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 
00:32:10.867 [2024-07-12 17:42:49.755615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.755761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.867 [2024-07-12 17:42:49.755772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.867 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.755837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.755995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.756005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.756096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.756179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.756189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.756280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.756447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.756457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 
00:32:10.868 [2024-07-12 17:42:49.756664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.756749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.756761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.756976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.757116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.757127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.757276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.757418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.757428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.757517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.757598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.757608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 
00:32:10.868 [2024-07-12 17:42:49.757844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.757991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.758073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.758300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.758546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 
00:32:10.868 [2024-07-12 17:42:49.758839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.758933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.759039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.759247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.759261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.759418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.759488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.759499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.759564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.759634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.759643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 
00:32:10.868 [2024-07-12 17:42:49.759933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.760165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.760490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.760736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.760830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 
00:32:10.868 [2024-07-12 17:42:49.760983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.761232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.761394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.761637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 
00:32:10.868 [2024-07-12 17:42:49.761890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.761979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.762059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.762213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.762223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.762456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.762526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.762537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.762696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.762865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.762875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 
00:32:10.868 [2024-07-12 17:42:49.763139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.763402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.763412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.763554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.763650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.763660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.763760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.763903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.763914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.868 qpair failed and we were unable to recover it. 00:32:10.868 [2024-07-12 17:42:49.764052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.764240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.868 [2024-07-12 17:42:49.764251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.869 qpair failed and we were unable to recover it. 
00:32:10.869 [2024-07-12 17:42:49.764407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.869 [2024-07-12 17:42:49.764511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.869 [2024-07-12 17:42:49.764523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:10.869 qpair failed and we were unable to recover it.
[... the four-message group above (two connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f0944000b90, addr=10.0.0.2, port=4420, and the unrecoverable-qpair notice) repeats continuously with only the timestamps advancing, from 17:42:49.764 through 17:42:49.787 ...]
00:32:11.147 [2024-07-12 17:42:49.787457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.787593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.787604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.147 qpair failed and we were unable to recover it. 00:32:11.147 [2024-07-12 17:42:49.787674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.787758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.787768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.147 qpair failed and we were unable to recover it. 00:32:11.147 [2024-07-12 17:42:49.787924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.787996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.788006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.147 qpair failed and we were unable to recover it. 00:32:11.147 [2024-07-12 17:42:49.788222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.788301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.788312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.147 qpair failed and we were unable to recover it. 
00:32:11.147 [2024-07-12 17:42:49.788388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.788479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.147 [2024-07-12 17:42:49.788490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.788586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.788670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.788681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.788764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.788938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.788950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.789025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 
00:32:11.148 [2024-07-12 17:42:49.789264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.789455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.789663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.789901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.789967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 
00:32:11.148 [2024-07-12 17:42:49.790199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.790508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.790745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.790926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.791081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.791182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.791192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 
00:32:11.148 [2024-07-12 17:42:49.791364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.791507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.791517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.791656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.791838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.791849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.791931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.792094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.792105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.792261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.792409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.792419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 
00:32:11.148 [2024-07-12 17:42:49.792496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.792579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.792591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.792839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.793276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.793513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 
00:32:11.148 [2024-07-12 17:42:49.793698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.793851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.794095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.794268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.794505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 
00:32:11.148 [2024-07-12 17:42:49.794669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.794845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.795028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.795293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.795536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 
00:32:11.148 [2024-07-12 17:42:49.795808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.795903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.796055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.796121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.796131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.796339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.796594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.796605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.148 qpair failed and we were unable to recover it. 00:32:11.148 [2024-07-12 17:42:49.796757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.148 [2024-07-12 17:42:49.796845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.796855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.796943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.797186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.797344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.797576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.797820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.797973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.798064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.798240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.798466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.798633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.798876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.799046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.799116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.799126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.799223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.799385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.799396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.799603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.799762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.799773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.799924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.800221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.800451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.800672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.800763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.800923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.801171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.801409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.801622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.801815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.801906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.802045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.802221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.802532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.802806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.802985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.803067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.803141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.803152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.803252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.803428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.803450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.803621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.803690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.803700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 
00:32:11.149 [2024-07-12 17:42:49.803860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.804028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.804039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.804136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.804349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.149 [2024-07-12 17:42:49.804359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.149 qpair failed and we were unable to recover it. 00:32:11.149 [2024-07-12 17:42:49.804523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.804672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.804683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.804764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.804834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.804845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.804982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.805231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.805480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.805723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.805904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.806048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.806128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.806140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.806379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.806552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.806562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.806755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.806854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.806865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.806938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.807181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.807421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.807757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.807841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.807905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.808280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.808567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.808742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.808895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.808981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.809237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.809430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.809652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.809835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.810059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.810208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.810219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.810375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.810525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.810535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.810710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.810782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.810792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.810867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.811197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.811393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.811722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.811878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.812120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.812212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.812222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 00:32:11.150 [2024-07-12 17:42:49.812318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.812480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.812491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.150 qpair failed and we were unable to recover it. 
00:32:11.150 [2024-07-12 17:42:49.812634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.150 [2024-07-12 17:42:49.812777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.812788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.812943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.813119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.813131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.813222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.813430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.813442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.813608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.813696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.813706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 
00:32:11.151 [2024-07-12 17:42:49.813865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.814236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.814489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.814741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.814835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 
00:32:11.151 [2024-07-12 17:42:49.814993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.815189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.815386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.815677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 
00:32:11.151 [2024-07-12 17:42:49.815852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.815937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.816014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.816276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.816525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 
00:32:11.151 [2024-07-12 17:42:49.816753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.816829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.816947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.817196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.817412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 
00:32:11.151 [2024-07-12 17:42:49.817580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.817755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.817932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.818092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.818240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.818251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.818451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.818550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.818561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 
00:32:11.151 [2024-07-12 17:42:49.818659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.818803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.818816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.151 qpair failed and we were unable to recover it. 00:32:11.151 [2024-07-12 17:42:49.818902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.151 [2024-07-12 17:42:49.818975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.818986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.819115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.819323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.819334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.819543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.819627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.819638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.819715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.819851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.819862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.819965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.820131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.820436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.820750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.820833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.820971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.821179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.821189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.821325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.821501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.821513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.821652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.821795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.821805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.821945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.822082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.822093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.822229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.822446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.822457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.822539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.822623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.822633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.822895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.823339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.823502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.823747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.823842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.823991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.824184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.824404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.824717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.824795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.824865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.825122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.825383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.825617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.825843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.825999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.826009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.826157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.826236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.826246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.826418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb38690 is same with the state(5) to be set 00:32:11.152 [2024-07-12 17:42:49.826584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.826705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.826725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 00:32:11.152 [2024-07-12 17:42:49.826908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.827150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.827169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.152 qpair failed and we were unable to recover it. 
00:32:11.152 [2024-07-12 17:42:49.827269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.827384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.152 [2024-07-12 17:42:49.827400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.827504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.827676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.827691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.827776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.828200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 
00:32:11.153 [2024-07-12 17:42:49.828551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.828802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.828976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.829121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.829222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.829237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.829400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.829506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.829521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 
00:32:11.153 [2024-07-12 17:42:49.829683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.829770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.829785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.829967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.830212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.830227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.830336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.830488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.830503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.830649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.830739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.830754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 
00:32:11.153 [2024-07-12 17:42:49.830851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.831068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.831083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.831261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.831447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.831462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.831566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.831806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.831821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.832079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 
00:32:11.153 [2024-07-12 17:42:49.832284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.832481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.832825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.832929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.833150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.833265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.833284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 
00:32:11.153 [2024-07-12 17:42:49.833446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.833533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.833548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.833629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.833849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.833864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.834030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.834154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.834169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.834324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.834536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.834552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 
00:32:11.153 [2024-07-12 17:42:49.834722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.834812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.834826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.834987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.835207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.835222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.835376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.835462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.835477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.835587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.835765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.835780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 
00:32:11.153 [2024-07-12 17:42:49.835937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.836085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.836100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.836262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.836427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.836445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.153 qpair failed and we were unable to recover it. 00:32:11.153 [2024-07-12 17:42:49.836598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.836676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.153 [2024-07-12 17:42:49.836691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.836867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 
00:32:11.154 [2024-07-12 17:42:49.837135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.837414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.837849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.837935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.838027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.838191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.838206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 
00:32:11.154 [2024-07-12 17:42:49.838381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.838597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.838612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.838704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.838798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.838813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.839004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.839204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 
00:32:11.154 [2024-07-12 17:42:49.839533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.839812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.839923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.840116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.840194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.840210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.840425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.840592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.840607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 
00:32:11.154 [2024-07-12 17:42:49.840759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.840927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.840941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.841041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.841231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.841496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 
00:32:11.154 [2024-07-12 17:42:49.841807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.841977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.842077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.842245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.842268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.842363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.842544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.842559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 00:32:11.154 [2024-07-12 17:42:49.842660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.842839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.154 [2024-07-12 17:42:49.842854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.154 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-07-12 17:42:49.864568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.864866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.864901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:11.157 qpair failed and we were unable to recover it.
00:32:11.157 [2024-07-12 17:42:49.865054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.865232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.865276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:11.157 qpair failed and we were unable to recover it.
00:32:11.157 [2024-07-12 17:42:49.865462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.865642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.865672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:11.157 qpair failed and we were unable to recover it.
00:32:11.157 [2024-07-12 17:42:49.865926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.866122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.157 [2024-07-12 17:42:49.866153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:11.157 qpair failed and we were unable to recover it.
00:32:11.158 [2024-07-12 17:42:49.891248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.158 [2024-07-12 17:42:49.891435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.158 [2024-07-12 17:42:49.891465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2abe0 with addr=10.0.0.2, port=4420
00:32:11.159 qpair failed and we were unable to recover it.
00:32:11.159 [2024-07-12 17:42:49.892391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.159 [2024-07-12 17:42:49.892725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.159 [2024-07-12 17:42:49.892762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.159 qpair failed and we were unable to recover it.
00:32:11.159 [2024-07-12 17:42:49.903196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.159 [2024-07-12 17:42:49.903485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.159 [2024-07-12 17:42:49.903516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.159 qpair failed and we were unable to recover it. 00:32:11.159 [2024-07-12 17:42:49.903645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.159 [2024-07-12 17:42:49.903824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.159 [2024-07-12 17:42:49.903855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.159 qpair failed and we were unable to recover it. 00:32:11.159 [2024-07-12 17:42:49.904039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.159 [2024-07-12 17:42:49.904211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.159 [2024-07-12 17:42:49.904240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.159 qpair failed and we were unable to recover it. 00:32:11.159 [2024-07-12 17:42:49.904507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.904727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.904772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 
00:32:11.160 [2024-07-12 17:42:49.905011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.905234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.905249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 00:32:11.160 [2024-07-12 17:42:49.905422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.905661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.905691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 00:32:11.160 [2024-07-12 17:42:49.905814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.905952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.905983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 00:32:11.160 [2024-07-12 17:42:49.906274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.906495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.906525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 
00:32:11.160 [2024-07-12 17:42:49.906723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.906903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.906933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 00:32:11.160 [2024-07-12 17:42:49.907194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.907371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.907403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 00:32:11.160 [2024-07-12 17:42:49.907584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.907706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.160 [2024-07-12 17:42:49.907736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.160 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.907990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.908235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.908288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.908598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.908776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.908806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.909132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.909399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.909436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.909718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.909962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.909977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.910153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.910388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.910419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.910698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.910930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.910945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.911106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.911267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.911283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.911384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.911637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.911652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.911830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.911967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.911997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.912328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.912544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.912574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.912781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.913025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.913055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.913265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.913484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.913515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.913767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.914007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.914022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.914153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.914332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.914363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.914549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.914850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.914890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.915111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.915394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.915410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.915599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.915724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.915754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.915946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.916145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.916176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.916375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.916598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.916628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.916756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.916921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.916955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.917157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.917385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.917417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.917703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.917906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.917921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.918086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.918280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.918311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.918539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.918716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.918746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.918878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.919057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.919073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.919248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.919455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.919485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.919767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.919885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.919900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.920002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.920179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.920209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 00:32:11.161 [2024-07-12 17:42:49.920503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.920763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.920778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.161 qpair failed and we were unable to recover it. 
00:32:11.161 [2024-07-12 17:42:49.920927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.161 [2024-07-12 17:42:49.921014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.921030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.921124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.921290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.921306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.921397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.921562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.921592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.921872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.922058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.922089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 
00:32:11.162 [2024-07-12 17:42:49.922282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.922497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.922528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.922821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.923029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.923059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.923265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.923457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.923487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.923702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.923949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.923979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 
00:32:11.162 [2024-07-12 17:42:49.924197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.924389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.924420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.924707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.924814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.924844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.924970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.925220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.925250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.925458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.925582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.925613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 
00:32:11.162 [2024-07-12 17:42:49.925794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.925916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.925947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.926089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.926364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.926395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.926653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.926782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.926812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 00:32:11.162 [2024-07-12 17:42:49.926992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.927253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.162 [2024-07-12 17:42:49.927305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.162 qpair failed and we were unable to recover it. 
00:32:11.162 [2024-07-12 17:42:49.927552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.162 [2024-07-12 17:42:49.927802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.162 [2024-07-12 17:42:49.927832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.162 qpair failed and we were unable to recover it.
... (the same connect() failure with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock error for tqpair=0x7f093c000b90 at 10.0.0.2:4420, and qpair recovery failure repeat for every connection attempt from 17:42:49.927 through 17:42:49.962) ...
00:32:11.165 [2024-07-12 17:42:49.962301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.165 [2024-07-12 17:42:49.962520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.165 [2024-07-12 17:42:49.962535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.165 qpair failed and we were unable to recover it.
00:32:11.165 [2024-07-12 17:42:49.962690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.962837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.962855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.963119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.963273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.963288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.963456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.963633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.963663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.963790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.964035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.964066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 
00:32:11.165 [2024-07-12 17:42:49.964186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.964383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.964415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.964539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.964738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.964779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.964865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.965115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.965145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.965346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.965551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.965582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 
00:32:11.165 [2024-07-12 17:42:49.965719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.966006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.966036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.966169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.966364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.966380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.966632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.966905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.966941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.967101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.967279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.967295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 
00:32:11.165 [2024-07-12 17:42:49.967463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.967686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.967716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.967925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.968173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.968204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.165 qpair failed and we were unable to recover it. 00:32:11.165 [2024-07-12 17:42:49.968429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.165 [2024-07-12 17:42:49.968618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.968648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.968840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.969024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.969054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.969251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.969392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.969422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.969615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.969803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.969833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.969959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.970142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.970173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.970310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.970530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.970560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.970770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.971070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.971104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.971306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.971502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.971532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.971747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.971880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.971911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.972139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.972248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.972280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.972479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.972759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.972789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.973097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.973293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.973324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.973591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.973728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.973743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.973909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.974093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.974124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.974336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.974551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.974582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.974714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.974868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.974883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.974994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.975159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.975174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.975274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.975375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.975390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.975638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.975849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.975880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.976076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.976199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.976229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.976439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.976634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.976665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.976851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.977090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.977120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.977311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.977561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.977592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.977775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.977999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.978028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.978282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.978419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.978434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.978658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.978850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.978866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.979097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.979325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.979358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.979566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.979768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.979798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.980063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.980297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.980328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.166 [2024-07-12 17:42:49.980511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.980642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.980673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 
00:32:11.166 [2024-07-12 17:42:49.980865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.981062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.166 [2024-07-12 17:42:49.981093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.166 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.981370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.981565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.981596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.981779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.981875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.981890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.982057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.982164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.982199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 
00:32:11.167 [2024-07-12 17:42:49.982329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.982444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.982475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.982730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.982850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.982880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.983070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.983216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.983249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.983532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.983804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.983835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 
00:32:11.167 [2024-07-12 17:42:49.984094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.984319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.984351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.984499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.984696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.984727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.984855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.985045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.985075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.985270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.985471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.985502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 
00:32:11.167 [2024-07-12 17:42:49.985710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.985914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.985945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.986270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.986486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.986517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.986707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.986834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.986850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.987121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.987390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.987422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 
00:32:11.167 [2024-07-12 17:42:49.987620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.987892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.987922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.988183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.988333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.988349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.988606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.988805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.988835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.989042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.989340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.989372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 
00:32:11.167 [2024-07-12 17:42:49.989605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.989886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.989917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.990134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.990397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.990429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.990653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.990839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.990869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.991124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.991399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.991430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 
00:32:11.167 [2024-07-12 17:42:49.991648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.991873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.991903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.992105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.992299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.992332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.992547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.992821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.992861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.993030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.993153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.993184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 
00:32:11.167 [2024-07-12 17:42:49.993387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.993517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.993547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.993756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.993970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.994001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.994272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.994542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.994573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.167 qpair failed and we were unable to recover it. 00:32:11.167 [2024-07-12 17:42:49.994847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.167 [2024-07-12 17:42:49.994981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.995012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:49.995224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.995518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.995551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.995846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.995975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.996005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.996194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.996468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.996501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.996643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.996801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.996816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:49.997069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.997239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.997260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.997452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.997640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.997671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.997875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.997990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.998020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.998149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.998261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.998276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:49.998500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.998779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.998810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.999088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.999309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.999341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:49.999557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.999854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:49.999886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.000137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.000316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.000348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:50.000568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.000772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.000787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.000978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.001203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.001219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.001409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.001636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.001652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.001922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.002144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.002160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:50.002383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.002608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.002623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.002738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.002902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.002918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.003071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.003230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.003246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.003412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.003583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.003599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:50.003773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.003926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.003941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.004130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.004398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.004414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.004584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.004809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.004824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.005043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.005249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.005271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:50.005441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.005637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.005653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.005835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.005939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.005955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.006039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.006262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.006278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.006468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.006669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.006685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 
00:32:11.168 [2024-07-12 17:42:50.006878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.007000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.007016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.168 qpair failed and we were unable to recover it. 00:32:11.168 [2024-07-12 17:42:50.007168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.168 [2024-07-12 17:42:50.007329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.007345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.007623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.007716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.007732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.007969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.008123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.008138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 
00:32:11.169 [2024-07-12 17:42:50.008238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.008470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.008487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.008654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.008835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.008850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.009031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.009281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.009297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.009508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.009701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.009716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 
00:32:11.169 [2024-07-12 17:42:50.009803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.009957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.009972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.010070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.010342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.010357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.010525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.010735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.010750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.010906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.011170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.011185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 
00:32:11.169 [2024-07-12 17:42:50.011446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.011706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.011722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.011897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.012048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.012064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.012321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.012408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.012424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.012619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.012849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.012864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 
00:32:11.169 [2024-07-12 17:42:50.013074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.013319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.013336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.013504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.013733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.013748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.014021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.014273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.014289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.014441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.014607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.014623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 
00:32:11.169 [2024-07-12 17:42:50.014776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.014970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.014986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.015158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.015320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.015336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.015557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.015711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.015726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.015926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.016097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.016113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 
00:32:11.169 [2024-07-12 17:42:50.016388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.016592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.016607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.016794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.016954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.016970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.017140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.017293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.017310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 00:32:11.169 [2024-07-12 17:42:50.017498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.017672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.169 [2024-07-12 17:42:50.017688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.169 qpair failed and we were unable to recover it. 
00:32:11.170 [2024-07-12 17:42:50.017966 .. 17:42:50.052431] the identical failure sequence repeats: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."
00:32:11.172 [2024-07-12 17:42:50.052512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.052596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.052612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.172 qpair failed and we were unable to recover it. 00:32:11.172 [2024-07-12 17:42:50.052689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.052852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.052868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.172 qpair failed and we were unable to recover it. 00:32:11.172 [2024-07-12 17:42:50.052962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.053122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.053137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.172 qpair failed and we were unable to recover it. 00:32:11.172 [2024-07-12 17:42:50.053286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.053507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.053522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.172 qpair failed and we were unable to recover it. 
00:32:11.172 [2024-07-12 17:42:50.053799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.053984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.054000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.172 qpair failed and we were unable to recover it. 00:32:11.172 [2024-07-12 17:42:50.054091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.054192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.054207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.172 qpair failed and we were unable to recover it. 00:32:11.172 [2024-07-12 17:42:50.054374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.172 [2024-07-12 17:42:50.054453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.054468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.054548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.054763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.054779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 
00:32:11.173 [2024-07-12 17:42:50.054860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.055053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.055069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.055218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.055439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.055455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.055546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.055700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.055716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.055911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.056022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.056037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 
00:32:11.173 [2024-07-12 17:42:50.056203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.056351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.056367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.056612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.056688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.056703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.056854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.057174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 
00:32:11.173 [2024-07-12 17:42:50.057505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.057765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.057999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.058218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.058446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.058462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.058549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.058636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.058650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 
00:32:11.173 [2024-07-12 17:42:50.058747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.058911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.058926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.059088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.059353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.059369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.059522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.059690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.059705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.059826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.059998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.060013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 
00:32:11.173 [2024-07-12 17:42:50.060103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.060249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.060270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.060437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.060529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.060545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.060816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.061005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.061021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.061121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.061288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.061304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 
00:32:11.173 [2024-07-12 17:42:50.061494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.061671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.061687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.061840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.061987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.062002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.062148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.062391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.062408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 00:32:11.173 [2024-07-12 17:42:50.062510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.062755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.062770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.173 qpair failed and we were unable to recover it. 
00:32:11.173 [2024-07-12 17:42:50.063043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.063213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.173 [2024-07-12 17:42:50.063228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.063497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.063758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.063774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.064017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.064275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.064291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.064456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.064638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.064654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 
00:32:11.174 [2024-07-12 17:42:50.064922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.065800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.065817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.066014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.066264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.066280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.066432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.066589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.066604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.066839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.067073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.067088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 
00:32:11.174 [2024-07-12 17:42:50.067314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.067568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.067584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.067749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.067970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.067985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.068270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.068511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.068527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.068764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.069004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.069019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 
00:32:11.174 [2024-07-12 17:42:50.069271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.069546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.069562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.069732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.069934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.069949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.070187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.070365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.070381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.070572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.070794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.070809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 
00:32:11.174 [2024-07-12 17:42:50.071079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.071191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.071206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.071365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.071541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.071557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.071718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.071909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.071924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.072015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.072180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.072195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 
00:32:11.174 [2024-07-12 17:42:50.072288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.072435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.072451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.072702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.072866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.072881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.072978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.073054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.073069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.073239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.073489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.073505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 
00:32:11.174 [2024-07-12 17:42:50.073689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.073947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.073963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.074183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.074442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.074458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.074712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.074932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.074947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.075194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.075467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.075483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 
00:32:11.174 [2024-07-12 17:42:50.075713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.075952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.075967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.076143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.076312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.076328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.076506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.076755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.174 [2024-07-12 17:42:50.076770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.174 qpair failed and we were unable to recover it. 00:32:11.174 [2024-07-12 17:42:50.076863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.077123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.077138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.077290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.077537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.077552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.077772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.077865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.077880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.077963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.078233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.078249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.078448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.078638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.078654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.078919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.079077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.079092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.079313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.079461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.079477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.079638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.079884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.079899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.080063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.080275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.080291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.080452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.080670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.080685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.080836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.081044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.081059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.081296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.081571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.081586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.081836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.081986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.082001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.082227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.082410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.082426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.082706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.082867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.082883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.083125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.083347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.083363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.083545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.083723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.083738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.083837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.084095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.084110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.084332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.084616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.084631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.084869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.085107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.085123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.085290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.085513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.085528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.085679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.085793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.085809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.085969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.086202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.086217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.086389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.086551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.086567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.086663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.086877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.086893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.087132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.087386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.087402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.087599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.087841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.087857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.088110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.088327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.088344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.088620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.088869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.088884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 
00:32:11.175 [2024-07-12 17:42:50.089035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.089293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.089309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.175 [2024-07-12 17:42:50.089532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.089762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.175 [2024-07-12 17:42:50.089777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.175 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.090015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.090195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.090211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.090399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.090648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.090663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 
00:32:11.176 [2024-07-12 17:42:50.090813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.091037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.091056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.091289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.091457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.091472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.091573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.091788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.091804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.091962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.092187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.092202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 
00:32:11.176 [2024-07-12 17:42:50.092462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.092694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.092709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.092857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.093025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.093040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.093206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.093455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.093471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.093656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.093760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.093776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 
00:32:11.176 [2024-07-12 17:42:50.094019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.094295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.094311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.094480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.094661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.094676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.094898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.095075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.095093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.095269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.095366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.095381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 
00:32:11.176 [2024-07-12 17:42:50.095651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.095891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.095907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.096123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.096274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.096290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.096537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.096766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.096781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.097049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.097325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.097341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 
00:32:11.176 [2024-07-12 17:42:50.097640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.097911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.097927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.176 [2024-07-12 17:42:50.098168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.098327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.176 [2024-07-12 17:42:50.098343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.176 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.098515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.098682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.098698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.098993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.099165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.099181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 
00:32:11.446 [2024-07-12 17:42:50.099295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.099397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.099415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.099593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.099844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.099859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.100034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.100235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.100263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.100457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.100637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.100684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 
00:32:11.446 [2024-07-12 17:42:50.100915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.101227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.101280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.101534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.101729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.101764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.101890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.102067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.102087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.102379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.102670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.102686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 
00:32:11.446 [2024-07-12 17:42:50.102911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.103064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.103079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.103266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.103380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.103395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.103586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.103830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.103850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.104020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.104270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.104287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 
00:32:11.446 [2024-07-12 17:42:50.104437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.104589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.104604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.104846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.105185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.105507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 
00:32:11.446 [2024-07-12 17:42:50.105802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.105935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.106180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.106264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.106280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.106550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.106806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.106821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.446 qpair failed and we were unable to recover it. 00:32:11.446 [2024-07-12 17:42:50.106982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.446 [2024-07-12 17:42:50.107131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.447 [2024-07-12 17:42:50.107146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.447 qpair failed and we were unable to recover it. 
00:32:11.447 [2024-07-12 17:42:50.107296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.107461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.107477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.107647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.107815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.107831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.108021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.108293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.108309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.108541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.108712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.108728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.108822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.109017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.109032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.109200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.109391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.109407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.109623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.109840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.109855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.110031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.110247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.110267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.110543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.110795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.110811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.110978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.111136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.111151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.111348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.111589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.111604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.111866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.112106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.112136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.112398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.112656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.112687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.112993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.113287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.113318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.113445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.113719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.113749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.113951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.114231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.114270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.114480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.114731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.114761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.115017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.115297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.115329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.115535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.115812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.115843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.116033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.116284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.116324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.116494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.116743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.116773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.117061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.117315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.117347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.117632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.117910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.117940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.118121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.118411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.118427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.118675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.118823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.118839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.119025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.119252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.119311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.119596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.119886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.119916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.120225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.120520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.120551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.120739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.121029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.121059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.447 [2024-07-12 17:42:50.121366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.121634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.447 [2024-07-12 17:42:50.121664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.447 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.121855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.122160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.122201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.122444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.122692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.122708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.122962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.123134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.123150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.123359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.123479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.123509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.123712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.123994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.124024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.124240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.124500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.124531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.124729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.125029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.125060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.125360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.125632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.125663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.125813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.125991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.126021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.126277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.126462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.126492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.126748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.127074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.127105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.127415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.127616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.127646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.127832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.128079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.128109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.128292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.128506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.128535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.128837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.129148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.129178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.129378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.129555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.129571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.129741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.129848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.129878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.130162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.130511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.130543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.130738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.131003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.131034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.131264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.131542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.131572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.131832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.132108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.132138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.132341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.132613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.132644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.132926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.133104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.133134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.133313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.133507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.133537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.133866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.134142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.134172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.134484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.134579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.134593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.134842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.135026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.135056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.135357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.135615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.135646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.135902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.136158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.136188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.136490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.136637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.448 [2024-07-12 17:42:50.136653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.448 qpair failed and we were unable to recover it.
00:32:11.448 [2024-07-12 17:42:50.136924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.137170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.137185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.137419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.137696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.137726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.137911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.138202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.138232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.138552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.138736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.138766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.139047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.139359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.139391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.139589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.139839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.139869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.140152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.140400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.140431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.140716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.140965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.140995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.141199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.141408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.141424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.141579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.141778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.141809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.142009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.142280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.142311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.142519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.142794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.449 [2024-07-12 17:42:50.142824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.449 qpair failed and we were unable to recover it.
00:32:11.449 [2024-07-12 17:42:50.143113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.143374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.143406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.143540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.143786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.143815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.144122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.144327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.144376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.144608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.144856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.144887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 
00:32:11.449 [2024-07-12 17:42:50.145086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.145288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.145304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.145467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.145580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.145595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.145699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.145867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.145882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.146049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.146157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.146187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 
00:32:11.449 [2024-07-12 17:42:50.146450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.146667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.146682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.146794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.147014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.147044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.147305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.147506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.147521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.147799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.147979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.148009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 
00:32:11.449 [2024-07-12 17:42:50.148288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.148416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.148447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.148650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.148927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.148958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.149242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.149501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.149517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.149692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.149856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.149888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 
00:32:11.449 [2024-07-12 17:42:50.150187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.150371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.150387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.150545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.150792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.150807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.449 [2024-07-12 17:42:50.150926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.151193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.449 [2024-07-12 17:42:50.151223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.449 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.151457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.151715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.151731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 
00:32:11.450 [2024-07-12 17:42:50.151899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.152162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.152192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.152469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.152747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.152778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.152997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.153247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.153268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.153380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.153534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.153577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 
00:32:11.450 [2024-07-12 17:42:50.153835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.154041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.154072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.154358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.154553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.154569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.154793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.154908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.154938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.155135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.155412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.155444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 
00:32:11.450 [2024-07-12 17:42:50.155753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.155998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.156028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.156309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.156619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.156650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.156867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.157167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.157197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.157484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.157629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.157658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 
00:32:11.450 [2024-07-12 17:42:50.157942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.158277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.158309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.158584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.158896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.158927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.159131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.159310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.159326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.159549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.159771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.159786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 
00:32:11.450 [2024-07-12 17:42:50.159977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.160231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.160273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.160482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.160747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.160779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.161070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.161247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.161289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.161520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.161719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.161757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 
00:32:11.450 [2024-07-12 17:42:50.162041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.162275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.162306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.162584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.162782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.162813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.163069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.163279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.163296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.450 qpair failed and we were unable to recover it. 00:32:11.450 [2024-07-12 17:42:50.163546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.163764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.450 [2024-07-12 17:42:50.163779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 
00:32:11.451 [2024-07-12 17:42:50.163879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.164055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.164087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.164357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.164554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.164584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.164892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.165139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.165169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.165447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.165648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.165679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 
00:32:11.451 [2024-07-12 17:42:50.165950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.166177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.166210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.166368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.166526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.166546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.166798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.167064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.167080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.167291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.167442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.167458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 
00:32:11.451 [2024-07-12 17:42:50.167634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.167893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.167924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.168178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.168388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.168404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.168664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.168806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.168836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.168974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.169175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.169207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 
00:32:11.451 [2024-07-12 17:42:50.169472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.169733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.169764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.170044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.170249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.170288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.170482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.170779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.170810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.170992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.171275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.171313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 
00:32:11.451 [2024-07-12 17:42:50.171572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.171803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.171833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.172031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.172311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.172344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.172528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.172686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.172717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 00:32:11.451 [2024-07-12 17:42:50.172860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.173057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.451 [2024-07-12 17:42:50.173089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.451 qpair failed and we were unable to recover it. 
00:32:11.451 [2024-07-12 17:42:50.173238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.173451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.173484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.173622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.173874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.173904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.174161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.174427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.174458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.174698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.174820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.174851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.175048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.175299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.175331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.175449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.175718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.175755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.176060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.176361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.176394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.176508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.176778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.176794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.177027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.177209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.177239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.451 [2024-07-12 17:42:50.177490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.177673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.451 [2024-07-12 17:42:50.177702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.451 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.177837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.178040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.178070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.178267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.178461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.178492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.178792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.178968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.178983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.179152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.179421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.179453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.179638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.179819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.179850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.180064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.180360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.180395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.180701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.180837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.180868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.181149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.181387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.181419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.181562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.181696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.181726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.181924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.182177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.182207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.182399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.182569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.182585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.182821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.183077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.183107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.183365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.183570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.183586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.183840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.184094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.184125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.184431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.184642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.184672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.184911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.185155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.185186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.185388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.185615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.185645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.185853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.186148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.186178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.186380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.186602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.186632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.186819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.187138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.187169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.187376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.187646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.187676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.187883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.188081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.188111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.188372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.188529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.188544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.188727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.188815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.188830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.189075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.189232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.189272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.189407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.189690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.189720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.190023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.190204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.190234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.190456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.190677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.190693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.190931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.191218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.191249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.191564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.191792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.191823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.452 qpair failed and we were unable to recover it.
00:32:11.452 [2024-07-12 17:42:50.192050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.192286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.452 [2024-07-12 17:42:50.192303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.192523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.192773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.192789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.192989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.193240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.193281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.193596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.193921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.193951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.194237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.194429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.194461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.194701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.195025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.195056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.195371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.195561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.195601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.195721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.195891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.195920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.196229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.196523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.196555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.196749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.196946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.196976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.197177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.197453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.197486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.197669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.197832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.197862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.198200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.198476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.198508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.198794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.198975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.199005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.199289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.199471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.199501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.199790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.200082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.200114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.200431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.200722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.200755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.201089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.201392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.201425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.201561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.201818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.201849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.202079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.202209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.202240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.202538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.202713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.202744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.202881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.203021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.203051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.203316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.203451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.203480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.203737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.204006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.204036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.204318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.204568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.204599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.204857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.205151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.205181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.205474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.205653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.205668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.205846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.206119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.206149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.206366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.206651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.206666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.206841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.207017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.207047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.207292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.207521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.207551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.453 qpair failed and we were unable to recover it.
00:32:11.453 [2024-07-12 17:42:50.207694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.453 [2024-07-12 17:42:50.207894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.207910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.208073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.208274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.208305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.208451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.208657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.208688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.208960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.209250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.209294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.209553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.209789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.209819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.210103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.210326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.210358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.210589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.210733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.210764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.211087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.211223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.211253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.211551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.211754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.211784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.211975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.212250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.212293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.212576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.212761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.212792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.213081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.213368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.213399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.213601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.213793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.213823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.213972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.214249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.214292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.214551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.214683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.214713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.214980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.215173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.215204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.215451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.215678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.454 [2024-07-12 17:42:50.215708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.454 qpair failed and we were unable to recover it.
00:32:11.454 [2024-07-12 17:42:50.215870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.216132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.216171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.454 [2024-07-12 17:42:50.216334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.216421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.216437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.454 [2024-07-12 17:42:50.216526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.216776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.216807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.454 [2024-07-12 17:42:50.217011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.217220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.217251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 
00:32:11.454 [2024-07-12 17:42:50.217499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.217782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.217813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.454 [2024-07-12 17:42:50.218040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.218176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.218223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.454 [2024-07-12 17:42:50.218341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.218494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.218510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.454 [2024-07-12 17:42:50.218733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.218902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.218918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 
00:32:11.454 [2024-07-12 17:42:50.219139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.219268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.219285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.454 [2024-07-12 17:42:50.219452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.219734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.454 [2024-07-12 17:42:50.219766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.454 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.219953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.220096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.220128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.220277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.220478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.220508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.220750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.221026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.221056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.221192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.221386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.221426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.221701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.221864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.221879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.222163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.222354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.222386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.222515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.222772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.222803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.223114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.223400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.223432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.223711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.224001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.224031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.224227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.224426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.224458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.224670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.224866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.224897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.225185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.225363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.225379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.225553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.225736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.225767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.226007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.226336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.226367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.226508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.226701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.226732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.226959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.227273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.227304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.227503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.227724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.227755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.228039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.228222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.228252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.228481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.228614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.228662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.228807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.228969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.228999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.229220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.229539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.229571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.229832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.230089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.230121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.230251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.230539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.230554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.230730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.230895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.230927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.231067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.231369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.231402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.231596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.231816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.231846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.232080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.232283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.232314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.232589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.232818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.232848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.233121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.233409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.233440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 00:32:11.455 [2024-07-12 17:42:50.233693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.233951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.233981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.455 qpair failed and we were unable to recover it. 
00:32:11.455 [2024-07-12 17:42:50.234275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.234408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.455 [2024-07-12 17:42:50.234439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.234684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.234909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.234925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.235024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.235222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.235239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.235484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.235713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.235744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 
00:32:11.456 [2024-07-12 17:42:50.235966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.236283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.236315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.236648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.236924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.236940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.237169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.237369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.237401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.237539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.237854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.237886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 
00:32:11.456 [2024-07-12 17:42:50.238176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.238393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.238431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.238663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.238868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.238900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.239216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.239515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.239546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.239739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.240047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.240078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 
00:32:11.456 [2024-07-12 17:42:50.240352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.240558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.240589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.240822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.240995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.241026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.241226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.241459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.241490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.241686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.241935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.241965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 
00:32:11.456 [2024-07-12 17:42:50.242226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.242461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.242493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.242752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.243036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.243066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.243274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.243555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.243591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 00:32:11.456 [2024-07-12 17:42:50.243713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.243985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.456 [2024-07-12 17:42:50.244016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.456 qpair failed and we were unable to recover it. 
[The identical posix_sock_create / nvme_tcp_qpair_connect_sock failure sequence repeats continuously from 17:42:50.244343 through 17:42:50.286920: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:32:11.459 [2024-07-12 17:42:50.287143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.287337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.287369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.287654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.287943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.287959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.288134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.288447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.288479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.288748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.288974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.289005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 
00:32:11.459 [2024-07-12 17:42:50.289157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.289447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.289480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.289810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.290099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.290130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.290345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.290622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.290654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.290842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.291104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.291136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 
00:32:11.459 [2024-07-12 17:42:50.291401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.291684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.291716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.291993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.292262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.292296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.292595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.292711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.292742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.293034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.293296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.293329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 
00:32:11.459 [2024-07-12 17:42:50.293625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.293938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.293970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.294214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.294444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.294476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.294743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.294948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.294979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.459 qpair failed and we were unable to recover it. 00:32:11.459 [2024-07-12 17:42:50.295194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.459 [2024-07-12 17:42:50.295407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.295424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.295662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.295990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.296021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.296341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.296635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.296666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.296983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.297281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.297314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.297549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.297811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.297843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.298060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.298354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.298387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.298537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.298802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.298834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.299103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.299381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.299414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.299733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.300018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.300034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.300287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.300514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.300546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.300748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.300927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.300959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.301162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.301401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.301433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.301701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.301993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.302025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.302234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.302509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.302542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.302865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.303166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.303198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.303415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.303677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.303693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.303899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.304218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.304249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.304414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.304626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.304658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.304922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.305170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.305187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.305473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.305719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.305763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.305958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.306164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.306196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.306507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.306775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.306806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.307082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.307333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.307369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.307687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.307882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.307913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.308182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.308392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.308424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.308690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.308880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.308897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.309072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.309357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.309390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.309711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.309900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.309932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.310142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.310433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.310466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 
00:32:11.460 [2024-07-12 17:42:50.310600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.310865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.310897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.311196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.311449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.460 [2024-07-12 17:42:50.311481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.460 qpair failed and we were unable to recover it. 00:32:11.460 [2024-07-12 17:42:50.311611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.311901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.311932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 00:32:11.461 [2024-07-12 17:42:50.312233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.312546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.312578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 
00:32:11.461 [2024-07-12 17:42:50.312886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.313126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.313157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 00:32:11.461 [2024-07-12 17:42:50.313461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.313762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.313793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 00:32:11.461 [2024-07-12 17:42:50.314095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.314402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.314435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 00:32:11.461 [2024-07-12 17:42:50.314733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.315002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.315019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 
00:32:11.461 [2024-07-12 17:42:50.315316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.315521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.315553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 00:32:11.461 [2024-07-12 17:42:50.315872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.316069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.316099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 00:32:11.461 [2024-07-12 17:42:50.316321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.316612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.316643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 00:32:11.461 [2024-07-12 17:42:50.316952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.317182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.461 [2024-07-12 17:42:50.317214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.461 qpair failed and we were unable to recover it. 
00:32:11.461 [2024-07-12 17:42:50.317526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.317729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.317760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.317998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.318277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.318310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.318507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.318699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.318730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.318949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.319188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.319218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.319524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.319737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.319768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.320041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.320304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.320336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.320603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.320873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.320904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.321236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.321514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.321547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.321742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.321961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.321993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.322291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.322603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.322640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.322865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.323125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.323156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.323444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.323714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.323746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.323960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.324160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.324191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.324482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.324674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.324706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.325005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.325212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.325243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.325527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.325847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.325878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.326195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.326457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.326490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.326814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.327037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.327069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.327273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.327585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.327616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.327913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.328154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.328191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.461 qpair failed and we were unable to recover it.
00:32:11.461 [2024-07-12 17:42:50.328461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.328668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.461 [2024-07-12 17:42:50.328684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.328849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.329033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.329065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.329270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.329545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.329577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.329875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.330180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.330211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.330530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.330689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.330706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.330877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.331063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.331094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.331390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.331533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.331575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.331753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.331879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.331910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.332144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.332463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.332496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.332728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.333017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.333058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.333269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.333560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.333591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.333874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.334190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.334221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.334557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.334774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.334806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.335014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.335349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.335383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.335665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.335850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.335867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.336042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.336229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.336269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.336498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.336687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.336717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.336952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.337197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.337229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.337542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.337760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.337792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.338059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.338344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.338382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.338621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.338911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.338927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.339132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.339422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.339440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.339646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.339886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.339917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.340195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.340397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.340429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.340650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.340876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.340908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.341182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.341405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.341437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.341650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.341789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.341819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.342016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.342289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.342322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.342586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.342879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.342910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.343045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.343276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.343309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.343504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.343729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.343760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.462 qpair failed and we were unable to recover it.
00:32:11.462 [2024-07-12 17:42:50.344099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.462 [2024-07-12 17:42:50.344358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.344391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.344623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.344882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.344913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.345109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.345407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.345440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.345652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.345909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.345940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.346241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.346562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.346593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.346880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.347117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.347148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.347364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.347627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.347657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.347926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.348172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.348204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.348512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.348808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.348839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.349121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.349320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.349353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.349635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.349843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.349874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.350111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.350308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.350340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.350647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.350917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.350948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.351215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.351500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.351532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.351848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.352146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.352178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.352404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.352614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.352646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.352865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.353144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.353176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.353497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.353727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.353758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.354052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.354242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.354283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.354570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.354860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.354891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.355208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.355511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.355544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.355781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.356049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.356080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.356362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.356597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.356627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.356916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.357104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.357120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.357359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.357653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.357684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.357903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.358188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.358223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.463 [2024-07-12 17:42:50.358464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.358744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.463 [2024-07-12 17:42:50.358776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.463 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.359047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.359339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.359371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.359585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.359708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.359725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.359996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.360316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.360348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.360558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.360748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.360779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.361106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.361424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.361457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.361729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.361971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.362003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.362292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.362615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.362646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.362920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.363188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.363219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.363443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.363560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.363591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.363795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.364083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.464 [2024-07-12 17:42:50.364099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.464 qpair failed and we were unable to recover it.
00:32:11.464 [2024-07-12 17:42:50.364338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.364571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.364588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.364771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.364947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.364978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.365181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.365464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.365497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.365728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.365871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.365903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 
00:32:11.464 [2024-07-12 17:42:50.366040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.366330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.366349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.366561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.366829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.366860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.367162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.367466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.367498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.367800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.368105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.368136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 
00:32:11.464 [2024-07-12 17:42:50.368429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.368714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.368745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.368900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.369170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.369201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.369507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.369830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.369862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.370186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.370479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.370512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 
00:32:11.464 [2024-07-12 17:42:50.370757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.370961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.370978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.371237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.371503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.371545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.371755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.372042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.372074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.372277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.372491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.372522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 
00:32:11.464 [2024-07-12 17:42:50.372817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.373135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.373167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.373384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.373689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.373720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.373918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.374148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.374178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 00:32:11.464 [2024-07-12 17:42:50.374463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.374792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.374823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.464 qpair failed and we were unable to recover it. 
00:32:11.464 [2024-07-12 17:42:50.375056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.464 [2024-07-12 17:42:50.375221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.375252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.375533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.375853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.375884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.376196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.376492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.376525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.376806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.377131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.377162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 
00:32:11.465 [2024-07-12 17:42:50.377472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.377763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.377795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.378113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.378342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.378376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.378672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.378990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.379022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.379315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.379521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.379552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 
00:32:11.465 [2024-07-12 17:42:50.379771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.380087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.380119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.380443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.380665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.380696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.380889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.381150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.381181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.381396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.381692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.381724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 
00:32:11.465 [2024-07-12 17:42:50.381920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.382187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.382219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.382524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.382741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.382772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.383032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.383236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.383278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.383476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.383620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.383651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 
00:32:11.465 [2024-07-12 17:42:50.383892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.384179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.384209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.384529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.384824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.384855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.385135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.385405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.385438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.385633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.385832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.385864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 
00:32:11.465 [2024-07-12 17:42:50.386160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.386418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.386452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.386738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.386994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.387024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.387313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.387426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.387443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.387706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.387903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.387935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 
00:32:11.465 [2024-07-12 17:42:50.388148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.388411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.388443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.388641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.388901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.388933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.389135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.389320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.389337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.389441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.389708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.389738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 
00:32:11.465 [2024-07-12 17:42:50.389936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.390148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.390178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.390479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.390744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.390776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.390974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.391233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.465 [2024-07-12 17:42:50.391275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.465 qpair failed and we were unable to recover it. 00:32:11.465 [2024-07-12 17:42:50.391602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.391881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.391913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.466 qpair failed and we were unable to recover it. 
00:32:11.466 [2024-07-12 17:42:50.392184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.392515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.392548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.466 qpair failed and we were unable to recover it. 00:32:11.466 [2024-07-12 17:42:50.392832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.393043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.393075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.466 qpair failed and we were unable to recover it. 00:32:11.466 [2024-07-12 17:42:50.393284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.393573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.393605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.466 qpair failed and we were unable to recover it. 00:32:11.466 [2024-07-12 17:42:50.393746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.393977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.466 [2024-07-12 17:42:50.394009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.466 qpair failed and we were unable to recover it. 
00:32:11.466 [2024-07-12 17:42:50.394293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.394520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.394552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.394756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.395020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.395052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.395205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.395552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.395584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.395852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.396066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.396082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.396270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.396442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.396472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.396685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.396895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.396927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.397192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.397369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.397386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.397581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.397727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.397757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.398010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.398278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.398310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.398625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.398950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.398981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.399224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.399424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.399456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.399733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.399931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.399947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.400215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.400439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.400472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.400690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.400982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.401013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.401161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.401380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.401398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.401575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.401833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.401864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.402068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.402329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.402367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.402634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.402923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.466 [2024-07-12 17:42:50.402954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.466 qpair failed and we were unable to recover it.
00:32:11.466 [2024-07-12 17:42:50.403346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.403635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.403669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.403920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.404210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.404241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.404550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.404691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.404722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.404941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.405154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.405185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.405484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.405776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.405807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.406014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.406275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.406307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.406509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.406803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.406834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.407121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.407438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.407471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.407775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.408071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.408107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.408425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.408746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.408777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.408921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.409182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.409213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.409364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.409658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.409689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.409980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.410294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.410326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.410594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.410870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.410901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.411219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.411448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.411481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.411747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.412005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.412021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.412284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.412494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.412511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.412796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.413007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.413039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.736 [2024-07-12 17:42:50.413274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.413594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.736 [2024-07-12 17:42:50.413637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.736 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.413868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.414147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.414178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.414443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.414740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.414771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.415079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.415211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.415228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.415513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.415806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.415838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.416047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.416248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.416290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.416560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.416828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.416859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.417153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.417442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.417474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.417709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.418002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.418034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.418309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.418550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.418582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.418905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.419166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.419203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.419536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.419799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.419831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.420115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.420318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.420350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.420543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.420745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.420775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.421001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.421290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.421323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.421589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.421872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.421903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.422139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.422329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.422362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.422658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.422946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.422977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.423274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.423580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.423611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.423919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.424192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.424208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.424446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.424609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.424640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.424914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.425144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.425176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.425390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.425659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.425690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.425912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.426199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.426231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.426549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.426841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.426872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.427092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.427356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.427390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.427585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.427856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.427887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.428215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.428513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.428545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.737 qpair failed and we were unable to recover it.
00:32:11.737 [2024-07-12 17:42:50.428868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.429166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.737 [2024-07-12 17:42:50.429196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.429504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.429815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.429847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.430113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.430300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.430333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.430571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.430831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.430862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.431157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.431420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.431454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.431775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.432071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.432103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.432301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.432577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.432608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.432823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.433083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.433113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.433402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.433600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.433630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.433959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.434252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.434293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.434609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.434877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.434908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.435184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.435400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.435433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.435632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.435913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.435943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.436163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.436387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.436419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.436688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.437024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.437055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.437277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.437489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.437520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.437764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.437990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.438007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.438291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.438592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.438624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.438921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.439165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.439196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.439568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.439692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.439723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.439962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.440173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.440203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.440453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.440659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.440690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.440967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.441271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.441304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.441530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.441824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.738 [2024-07-12 17:42:50.441855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.738 qpair failed and we were unable to recover it.
00:32:11.738 [2024-07-12 17:42:50.442175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.442370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.442403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.738 qpair failed and we were unable to recover it. 00:32:11.738 [2024-07-12 17:42:50.442670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.442959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.442990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.738 qpair failed and we were unable to recover it. 00:32:11.738 [2024-07-12 17:42:50.443271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.443540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.443571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.738 qpair failed and we were unable to recover it. 00:32:11.738 [2024-07-12 17:42:50.443862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.444175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.444206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.738 qpair failed and we were unable to recover it. 
00:32:11.738 [2024-07-12 17:42:50.444515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.444784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.444827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.738 qpair failed and we were unable to recover it. 00:32:11.738 [2024-07-12 17:42:50.444989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.738 [2024-07-12 17:42:50.445264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.445297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.445566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.445839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.445870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.446075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.446277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.446295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 
00:32:11.739 [2024-07-12 17:42:50.446509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.446718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.446749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.447058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.447279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.447312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.447608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.447811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.447842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.448043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.448253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.448307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 
00:32:11.739 [2024-07-12 17:42:50.448576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.448719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.448750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.449047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.449263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.449280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.449512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.449699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.449730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.449935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.450224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.450266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 
00:32:11.739 [2024-07-12 17:42:50.450551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.450863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.450895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.451198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.451469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.451502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.451724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.451954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.451986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.452217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.452465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.452497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 
00:32:11.739 [2024-07-12 17:42:50.452802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.453115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.453146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.453363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.453562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.453593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.453801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.454085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.454100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.454340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.454503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.454520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 
00:32:11.739 [2024-07-12 17:42:50.454773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.454971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.455001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.455245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.455463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.455480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.455658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.455829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.455845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.456033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.456234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.456274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 
00:32:11.739 [2024-07-12 17:42:50.456573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.456842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.456874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.457077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.457396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.457429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.457676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.457950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.457992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.458133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.458392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.458424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 
00:32:11.739 [2024-07-12 17:42:50.458696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.458976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.459007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.459278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.459521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.459551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.739 qpair failed and we were unable to recover it. 00:32:11.739 [2024-07-12 17:42:50.459747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.739 [2024-07-12 17:42:50.459960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.459992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.460207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.460401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.460434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 
00:32:11.740 [2024-07-12 17:42:50.460639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.460828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.460858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.461123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.461435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.461469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.461796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.462103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.462120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.462407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.462610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.462641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 
00:32:11.740 [2024-07-12 17:42:50.462783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.462995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.463026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.463277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.463590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.463628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.463772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.464034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.464065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.464387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.464685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.464716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 
00:32:11.740 [2024-07-12 17:42:50.464929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.465231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.465287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.465481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.465684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.465715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.465914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.466202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.466233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.466538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.466685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.466716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 
00:32:11.740 [2024-07-12 17:42:50.466984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.467164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.467195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.467502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.467796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.467828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.468131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.468408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.468425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.468692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.468970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.469001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 
00:32:11.740 [2024-07-12 17:42:50.469218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.469527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.469560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.469863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.470135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.470166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.470355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.470602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.470634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.470924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.471149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.471180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 
00:32:11.740 [2024-07-12 17:42:50.471419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.471628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.471659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.740 qpair failed and we were unable to recover it. 00:32:11.740 [2024-07-12 17:42:50.471928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.472268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.740 [2024-07-12 17:42:50.472301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.741 qpair failed and we were unable to recover it. 00:32:11.741 [2024-07-12 17:42:50.472526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.741 [2024-07-12 17:42:50.472740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.741 [2024-07-12 17:42:50.472772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.741 qpair failed and we were unable to recover it. 00:32:11.741 [2024-07-12 17:42:50.472932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.741 [2024-07-12 17:42:50.473156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.741 [2024-07-12 17:42:50.473187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.741 qpair failed and we were unable to recover it. 
00:32:11.744 [2024-07-12 17:42:50.516218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.516431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.516463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.516617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.516857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.516873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.517113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.517410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.517442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.517639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.517838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.517870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 
00:32:11.744 [2024-07-12 17:42:50.518166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.518323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.518340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.518545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.518817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.518849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.519069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.519349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.519366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.519533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.519642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.519658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 
00:32:11.744 [2024-07-12 17:42:50.519949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.520228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.520267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.520483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.520780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.520811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.521104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.521326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.521368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.521603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.521826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.521843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 
00:32:11.744 [2024-07-12 17:42:50.521931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.522027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.522045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.522296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.522529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.522561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.522787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.523047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.523093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.523203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.523485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.523517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 
00:32:11.744 [2024-07-12 17:42:50.523848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.524050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.524081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.524377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.524495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.524526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.524822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.525097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.525129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.744 qpair failed and we were unable to recover it. 00:32:11.744 [2024-07-12 17:42:50.525406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.525726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.744 [2024-07-12 17:42:50.525757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 
00:32:11.745 [2024-07-12 17:42:50.526085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.526278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.526310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.526610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.526845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.526876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.527134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.527304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.527322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.527612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.527820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.527851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 
00:32:11.745 [2024-07-12 17:42:50.528139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.528392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.528429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.528753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.529066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.529098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.529396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.529708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.529739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.530048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.530275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.530293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 
00:32:11.745 [2024-07-12 17:42:50.530582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.530795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.530827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.531122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.531393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.531433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.531667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.531947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.531978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.532240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.532388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.532420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 
00:32:11.745 [2024-07-12 17:42:50.532616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.532887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.532918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.533143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.533350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.533383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.533589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.533885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.533917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.534239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.534538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.534570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 
00:32:11.745 [2024-07-12 17:42:50.534844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.535135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.535166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.535489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.535783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.535814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.536081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.536414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.536446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 00:32:11.745 [2024-07-12 17:42:50.536652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.536946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.536977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.745 qpair failed and we were unable to recover it. 
00:32:11.745 [2024-07-12 17:42:50.537274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.537555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.745 [2024-07-12 17:42:50.537586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.537885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.538205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.538235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.538485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.538770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.538801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.539047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.539238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.539281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 
00:32:11.746 [2024-07-12 17:42:50.539475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.539714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.539746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.540039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.540358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.540390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.540618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.540916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.540948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.541272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.541495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.541526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 
00:32:11.746 [2024-07-12 17:42:50.541830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.542126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.542157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.542463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.542679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.542710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.542906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.543108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.543139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.543351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.543527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.543557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 
00:32:11.746 [2024-07-12 17:42:50.543877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.544161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.544193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.544510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.544651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.544682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.544876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.545161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.545192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 00:32:11.746 [2024-07-12 17:42:50.545400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.545615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.746 [2024-07-12 17:42:50.545646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.746 qpair failed and we were unable to recover it. 
00:32:11.746 [2024-07-12 17:42:50.545941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.546235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.546276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.546559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.546870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.546902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.547207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.547448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.547480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.547676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.547973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.548005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.548307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.548510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.548527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.548711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.548974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.549005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.549223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.549502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.549535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.549798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.550054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.550085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.550289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.550552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.550584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.550788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.551079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.551118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.551280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.551525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.551557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.551793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.552031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.746 [2024-07-12 17:42:50.552062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.746 qpair failed and we were unable to recover it.
00:32:11.746 [2024-07-12 17:42:50.552332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.552615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.552645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.552857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.553146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.553177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.553455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.553550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.553568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.553736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.553987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.554019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.554348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.554645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.554676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.554994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.555295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.555328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.555566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.555797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.555828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.556053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.556273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.556305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.556625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.556915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.556947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.557224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.557554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.557586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.557872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.558186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.558218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.558544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.558839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.558871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.559210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.559509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.559543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.559764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.559972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.560004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.560279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.560570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.560601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.560795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.561108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.561140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.561296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.561448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.561479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.561774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.561915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.561946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.562139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.562428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.562461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.562673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.562952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.562983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.563183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.563374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.563406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.563538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.563823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.563855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.564092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.564378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.564395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.564584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.564772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.564803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.565032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.565221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.565253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.747 [2024-07-12 17:42:50.565459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.565718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.747 [2024-07-12 17:42:50.565749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.747 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.566071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.566332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.566364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.566501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.566817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.566847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.567045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.567337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.567375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.567659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.567846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.567878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.568194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.568496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.568528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.568837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.569107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.569146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.569407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.569623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.569640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.569877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.570048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.570064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.570324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.570511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.570543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.570779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.571069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.571100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.571418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.571652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.571682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.571977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.572246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.572288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.572578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.572801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.572842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.573133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.573413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.573445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.573771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.573961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.573993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.574288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.574633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.574665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.574883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.575172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.575204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.575455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.575746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.575777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.576098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.576359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.576393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.576681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.576903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.576934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.577139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.577471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.577489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.577751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.577988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.578020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.578233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.578405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.578443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.578665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.578945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.578975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.579213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.579512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.579529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.579810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.580031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.580063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.748 qpair failed and we were unable to recover it.
00:32:11.748 [2024-07-12 17:42:50.580373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.748 [2024-07-12 17:42:50.580648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.580680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.580987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.581251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.581295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.581517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.581799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.581830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.582025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.582285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.582318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.582658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.582937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.582969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.583292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.583505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.583536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.583838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.584028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.584065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.584353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.584497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.584529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.584651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.584839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.584869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.585160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.585419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.585450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.585743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.585934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.585964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.586178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.586462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.586494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.586808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.587043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.587060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.587365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.587556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.587586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.587796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.588084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.588115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.588411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.588727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.588758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.589051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.589295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.589313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.589567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.589731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.589762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.589977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.590290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.590323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.590453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.590649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.590680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.590975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.591238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.591281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.591553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.591787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.591818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.592105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.592370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.592402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.592707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.593025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:11.749 [2024-07-12 17:42:50.593056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:11.749 qpair failed and we were unable to recover it.
00:32:11.749 [2024-07-12 17:42:50.593349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.593668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.593700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.749 qpair failed and we were unable to recover it. 00:32:11.749 [2024-07-12 17:42:50.593933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.594212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.594244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.749 qpair failed and we were unable to recover it. 00:32:11.749 [2024-07-12 17:42:50.594569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.594858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.594889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.749 qpair failed and we were unable to recover it. 00:32:11.749 [2024-07-12 17:42:50.595204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.595436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.595468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.749 qpair failed and we were unable to recover it. 
00:32:11.749 [2024-07-12 17:42:50.595666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.595884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.595916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.749 qpair failed and we were unable to recover it. 00:32:11.749 [2024-07-12 17:42:50.596210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.596522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.596553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.749 qpair failed and we were unable to recover it. 00:32:11.749 [2024-07-12 17:42:50.596821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.597024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.749 [2024-07-12 17:42:50.597040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.749 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.597211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.597570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.597603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 
00:32:11.750 [2024-07-12 17:42:50.597850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.598084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.598115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.598407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.598747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.598779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.599059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.599394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.599427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.599709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.599967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.599999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 
00:32:11.750 [2024-07-12 17:42:50.600193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.600349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.600382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.600587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.600748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.600778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.601073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.601292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.601322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.601520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.601834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.601866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 
00:32:11.750 [2024-07-12 17:42:50.602217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.602527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.602560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.602771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.602965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.602996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.603195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.603469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.603486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.603748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.603981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.604013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 
00:32:11.750 [2024-07-12 17:42:50.604321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.604525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.604556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.604842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.605063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.605094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.605236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.605494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.605526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.605836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.606144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.606175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 
00:32:11.750 [2024-07-12 17:42:50.606410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.606704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.606735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.607004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.607280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.607313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.607433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.607647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.607679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.607947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.608224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.608277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 
00:32:11.750 [2024-07-12 17:42:50.608510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.608758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.608790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.609064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.609275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.609307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.609505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.609787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.609818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 00:32:11.750 [2024-07-12 17:42:50.610013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.610309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.610342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.750 qpair failed and we were unable to recover it. 
00:32:11.750 [2024-07-12 17:42:50.610642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.750 [2024-07-12 17:42:50.610861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.610892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.611191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.611519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.611552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.611794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.611986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.612017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.612317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.612530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.612561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 
00:32:11.751 [2024-07-12 17:42:50.612830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.613093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.613124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.613346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.613555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.613571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.613779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.614068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.614099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.614371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.614559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.614590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 
00:32:11.751 [2024-07-12 17:42:50.614860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.615151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.615182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.615505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.615826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.615857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.616076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.616364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.616396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.616627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.616927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.616958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 
00:32:11.751 [2024-07-12 17:42:50.617282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.617545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.617575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.617773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.617990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.618022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.618217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.618448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.618480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.618682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.618927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.618944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 
00:32:11.751 [2024-07-12 17:42:50.619244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.619368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.619385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.619560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.619823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.619855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.619986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.620281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.620313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.620528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.620791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.620808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 
00:32:11.751 [2024-07-12 17:42:50.621010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.621241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.621282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.621587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.621879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.621911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.622195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.622460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.622492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.622761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.622972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.623003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 
00:32:11.751 [2024-07-12 17:42:50.623274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.623554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.623585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.623905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.624170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.624201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.624543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.624799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.624816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 00:32:11.751 [2024-07-12 17:42:50.625064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.625226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.751 [2024-07-12 17:42:50.625243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.751 qpair failed and we were unable to recover it. 
00:32:11.754 [2024-07-12 17:42:50.670479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.670749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.670766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.754 qpair failed and we were unable to recover it. 00:32:11.754 [2024-07-12 17:42:50.671007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.671242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.671265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.754 qpair failed and we were unable to recover it. 00:32:11.754 [2024-07-12 17:42:50.671430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.671697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.671728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.754 qpair failed and we were unable to recover it. 00:32:11.754 [2024-07-12 17:42:50.672048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.672343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.672375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.754 qpair failed and we were unable to recover it. 
00:32:11.754 [2024-07-12 17:42:50.672588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.672859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.672891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.754 qpair failed and we were unable to recover it. 00:32:11.754 [2024-07-12 17:42:50.673187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.673377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.673408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.754 qpair failed and we were unable to recover it. 00:32:11.754 [2024-07-12 17:42:50.673608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.673875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.754 [2024-07-12 17:42:50.673906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.674199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.674511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.674543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.755 [2024-07-12 17:42:50.674845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.675130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.675160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.675480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.675784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.675814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.676110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.676308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.676341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.676628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.676892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.676922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.755 [2024-07-12 17:42:50.677140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.677402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.677434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.677731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.677864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.677895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.678192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.678506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.678523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.678787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.679029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.679060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.755 [2024-07-12 17:42:50.679356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.679591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.679623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.679921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.680236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.680277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.680481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.680714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.680744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.681032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.681347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.681379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.755 [2024-07-12 17:42:50.681509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.681728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.681759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.682044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.682324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.682356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.682571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.682860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.682891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.683026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.683371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.683403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.755 [2024-07-12 17:42:50.683551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.683857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.683889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.684083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.684409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.684442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.684643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.684743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.684759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.684956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.685143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.685173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.755 [2024-07-12 17:42:50.685452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.685590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.685621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.685831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.686124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.686165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.686466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.686766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.686796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.687011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.687231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.687272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.755 [2024-07-12 17:42:50.687425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.687658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.687688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.688026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.688308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.688341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.688631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.688753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.688785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 00:32:11.755 [2024-07-12 17:42:50.688979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.689135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.755 [2024-07-12 17:42:50.689167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.755 qpair failed and we were unable to recover it. 
00:32:11.756 [2024-07-12 17:42:50.689474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.689767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.689798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 00:32:11.756 [2024-07-12 17:42:50.690017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.690293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.690326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 00:32:11.756 [2024-07-12 17:42:50.690626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.690961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.690992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 00:32:11.756 [2024-07-12 17:42:50.691279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.691543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.691576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 
00:32:11.756 [2024-07-12 17:42:50.691884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.692181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.692212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 00:32:11.756 [2024-07-12 17:42:50.692427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.692655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.692686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 00:32:11.756 [2024-07-12 17:42:50.692893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.693001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.693020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 00:32:11.756 [2024-07-12 17:42:50.693178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.693423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.756 [2024-07-12 17:42:50.693441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:11.756 qpair failed and we were unable to recover it. 
00:32:12.027 [2024-07-12 17:42:50.693754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.694021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.694052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.694325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.694542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.694573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.694842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.695105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.695122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.695389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.695653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.695670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 
00:32:12.027 [2024-07-12 17:42:50.695908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.696166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.696211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.696497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.696813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.696844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.697151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.697454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.697487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.697717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.697906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.697937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 
00:32:12.027 [2024-07-12 17:42:50.698201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.698520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.698537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.698671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.698934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.698966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.699276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.699568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.699585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 00:32:12.027 [2024-07-12 17:42:50.699787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.699963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.027 [2024-07-12 17:42:50.699995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.027 qpair failed and we were unable to recover it. 
00:32:12.027 [2024-07-12 17:42:50.700288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.700581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.700614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.700911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.701249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.701294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.701553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.701815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.701846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.702141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.702327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.702360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.702661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.702876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.702907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.703178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.703454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.703486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.703696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.703960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.703991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.704217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.704510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.704542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.027 qpair failed and we were unable to recover it.
00:32:12.027 [2024-07-12 17:42:50.704771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.027 [2024-07-12 17:42:50.704917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.704934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.705196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.705400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.705433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.705596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.705855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.705885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.706095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.706299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.706333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.706627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.706940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.706971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.707273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.707587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.707619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.707946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.708236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.708292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.708609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.708816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.708847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.709042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.709187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.709218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.709552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.709782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.709813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.710039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.710327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.710359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.710578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.710866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.710897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.711126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.711246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.711287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.711557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.711773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.711803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.712017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.712339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.712372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.712551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.712746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.712777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.712909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.713101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.713132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.713415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.713681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.713698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.713856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.714017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.714032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.714332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.714601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.714643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.714811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.715066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.715097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.715312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.715602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.715634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.715837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.715945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.715962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.716235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.716541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.716573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.716769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.717066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.717098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.717340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.717538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.717554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.717758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.717953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.717985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.718266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.718474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.718505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.718798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.719086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.719121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.719444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.719754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.719771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.720009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.720295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.720327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.720523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.720791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.720822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.721061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.721203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.721234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.721524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.721691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.721722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.721933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.722073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.722104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.722338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.722582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.722613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.722882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.723089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.723121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.723417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.723629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.723660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.723961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.724154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.724185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.724481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.724690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.724721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.724955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.725076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.725092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.725349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.725536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.725552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.725764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.725925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.725941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.726107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.726373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.726390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.028 qpair failed and we were unable to recover it.
00:32:12.028 [2024-07-12 17:42:50.726659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.726921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.028 [2024-07-12 17:42:50.726952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.727164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.727330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.727363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.727608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.727924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.727956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.728173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.728451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.728483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.728763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.728939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.728956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.729190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.729374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.729391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.729511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.729606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.729623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.729853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.729944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.729960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.730224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.730560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.730592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.730877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.730972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.730988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.731122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.731304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.731322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.731618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.731799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.731815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.731989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.732175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.732206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.732556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.732824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.732840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.732972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.733072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.733089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.733251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.733359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.733376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.733625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.733891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.733922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.734154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.734363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.734394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.734618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.734853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.734883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.735175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.735493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.735526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.735695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.735807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.735823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.736002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.736293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.736326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.736534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.736827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.736865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.737008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.737322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.737354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.737570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.737831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.737848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.738033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.738138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.738155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.738474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.738582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.738599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.738767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.738979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.739009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.739302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.739472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.739503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.739719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.740056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.740087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.740425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.740709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.740740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.741014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.741204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.741236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.741549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.741824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.741862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.742095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.742337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.742369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.742636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.742885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.742917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.743164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.743305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.029 [2024-07-12 17:42:50.743337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.029 qpair failed and we were unable to recover it.
00:32:12.029 [2024-07-12 17:42:50.743586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.743866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.743883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.744074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.744337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.744369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.744659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.744882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.744912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.745187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.745394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.745426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 
00:32:12.029 [2024-07-12 17:42:50.745622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.745944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.745976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.746219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.746508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.746540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.746665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.746929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.746966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.747171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.747373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.747405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 
00:32:12.029 [2024-07-12 17:42:50.747619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.747736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.747767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.747982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.748204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.748235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.748456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.748789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.748821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.749011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.749215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.749246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 
00:32:12.029 [2024-07-12 17:42:50.749454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.749691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.749722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.750041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.750129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.750146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.750396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.750690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.750721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 00:32:12.029 [2024-07-12 17:42:50.750938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.751140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.029 [2024-07-12 17:42:50.751171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.029 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.751426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.751661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.751681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.751942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.752155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.752186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.752466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.752781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.752797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.753015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.753223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.753239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.753496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.753795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.753826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.753992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.754184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.754215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.754447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.754735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.754767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.754999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.755271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.755303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.755600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.755893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.755924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.756136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.756325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.756342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.756513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.756621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.756638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.756929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.757190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.757221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.757574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.757852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.757883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.758152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.758415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.758458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.758724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.758961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.758979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.759212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.759411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.759428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.759531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.759701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.759717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.759956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.760273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.760306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.760448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.760657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.760689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.760960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.761169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.761201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.761509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.761833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.761864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.762158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.762420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.762453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.762751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.762977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.763008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.763213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.763546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.763578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.763796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.764062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.764093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.764289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.764499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.764530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.764821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.765023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.765040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.765277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.765465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.765481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.765588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.765755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.765797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.765990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.766190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.766222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.766452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.766569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.766600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.766922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.767177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.767207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.767467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.767659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.767690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.767904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.768101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.768134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.768456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.768748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.768779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.768972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.769238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.769289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.769562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.769774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.769791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.769962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.770157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.770188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.770403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.770668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.770700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.770917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.771094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.771139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.771352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.771546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.771577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.771847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.772014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.772044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.772311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.772595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.772625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 00:32:12.030 [2024-07-12 17:42:50.772947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.773160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.030 [2024-07-12 17:42:50.773191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.030 qpair failed and we were unable to recover it. 
00:32:12.030 [2024-07-12 17:42:50.773492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.773777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.773808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.774027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.774267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.774299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.774506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.774739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.774770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.774984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.775190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.775221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.775474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.775767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.775799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.775997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.776133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.776164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.776436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.776647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.776678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.776951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.777086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.777117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.777387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.777602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.777633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.777917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.778176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.778208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.778489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.778707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.778738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.778932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.779222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.779253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.779552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.779780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.779811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.780007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.780277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.780310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.780633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.780935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.780951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.781213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.781457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.781490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.781738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.782055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.782086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.782400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.782669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.782711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.782965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.783266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.783298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.783616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.783874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.783905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.784194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.784385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.784417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.784639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.784896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.784913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.785077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.785347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.785379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.785644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.785852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.785868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.786080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.786400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.786433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.786628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.786835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.786866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.787105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.787321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.787353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.787566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.787828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.787859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.788102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.788397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.788430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.788670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.788932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.788962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.789181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.789446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.789478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.789670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.789857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.789889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.790204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.790511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.790544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.790820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.791008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.791038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.791275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.791506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.791537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.791736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.792022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.792054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.792352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.792555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.792586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.792867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.793074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.793105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.793316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.793606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.793637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.793757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.794046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.794077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.794400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.794614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.794645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.794949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.795110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.795127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.795333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.795596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.795627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.795856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.796145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.796177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.796414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.796708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.796739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.796950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.797215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.797246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.797571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.797867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.797899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.798206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.798443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.798475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 
00:32:12.031 [2024-07-12 17:42:50.798671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.798971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.799002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.799287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.799507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.799539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.799778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.799967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.031 [2024-07-12 17:42:50.799984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.031 qpair failed and we were unable to recover it. 00:32:12.031 [2024-07-12 17:42:50.800276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.800597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.800627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 
00:32:12.032 [2024-07-12 17:42:50.800924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.801245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.801298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.801619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.801898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.801928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.802225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.802505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.802537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.802751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.803040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.803085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 
00:32:12.032 [2024-07-12 17:42:50.803349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.803685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.803716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.804002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.804337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.804370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.804640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.804848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.804879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.805068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.805306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.805339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 
00:32:12.032 [2024-07-12 17:42:50.805641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.805832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.805863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.806008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.806231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.806272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.806567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.806797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.806828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.807125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.807233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.807249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 
00:32:12.032 [2024-07-12 17:42:50.807518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.807847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.807879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.808082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.808345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.808362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.808540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.808865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.808896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 00:32:12.032 [2024-07-12 17:42:50.809178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.809452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.032 [2024-07-12 17:42:50.809470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.032 qpair failed and we were unable to recover it. 
00:32:12.032 [2024-07-12 17:42:50.809651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.032 [2024-07-12 17:42:50.809938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.032 [2024-07-12 17:42:50.809969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.032 qpair failed and we were unable to recover it.
00:32:12.034 [... the same cycle — two posix_sock_create connect() failures (errno = 111, ECONNREFUSED), an nvme_tcp_qpair_connect_sock error for tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 17:42:50.810280 through 17:42:50.853093]
00:32:12.034 [2024-07-12 17:42:50.853235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.853532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.853566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.853794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.854028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.854060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.854215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.854381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.854398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.854582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.854734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.854751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.854951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.855103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.855134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.855353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.855566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.855598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.855847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.856019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.856037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.856229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.856431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.856465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.856705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.856982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.857013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.857240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.857440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.857472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.857691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.857972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.858003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.858198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.858403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.858436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.858704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.858819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.858851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.859164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.859364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.859397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.859634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.859893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.859924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.860189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.860381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.860414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.860630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.860817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.860848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.861038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.861240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.861282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.861446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.861667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.861698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.861981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.862237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.862253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.862423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.862527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.862543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.862823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.862998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.863015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.863133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.863287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.863320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.863514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.863722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.863753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.863958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.864154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.864185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.864328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.864588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.864605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.864841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.865139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.865513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.865778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.865994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.866146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.866303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.866320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.866534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.866618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.866634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.866877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.866959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.866975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.867140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.867336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.867354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.867586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.867752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.867768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.867914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.868021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.868037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.868221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.868414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.868431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.868609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.868810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.868827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.869041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.869202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.869218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.869329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.869504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.869520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.034 [2024-07-12 17:42:50.869708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.869958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.869975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.870269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.870464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.870481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.870667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.870849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.870866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 00:32:12.034 [2024-07-12 17:42:50.871041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.871279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.034 [2024-07-12 17:42:50.871297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.034 qpair failed and we were unable to recover it. 
00:32:12.035 [2024-07-12 17:42:50.871476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.871716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.871733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.871917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.872078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.872095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.872287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.872382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.872399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.872559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.872738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.872755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 
00:32:12.035 [2024-07-12 17:42:50.872864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.873025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.873041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.873147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.873413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.873432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.873555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.873725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.873742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.873913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.874069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.874085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 
00:32:12.035 [2024-07-12 17:42:50.874356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.874546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.874563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.874725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.874986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.875002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.875124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.875379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.875396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.875678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.875836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.875853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 
00:32:12.035 [2024-07-12 17:42:50.875961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.876144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.876160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.876334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.876491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.876507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.876780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.876879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.876895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 00:32:12.035 [2024-07-12 17:42:50.877053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.877317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.035 [2024-07-12 17:42:50.877333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.035 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.910824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.910981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.910997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.911176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.911355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.911372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.911478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.911767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.911782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.911974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.912203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.912220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.912330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.912487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.912504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.912663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.912869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.912885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.913017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.913208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.913224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.913391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.913624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.913640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.913930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.914101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.914117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.914287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.914465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.914482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.914662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.914938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.914954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.915200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.915386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.915403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.915617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.915785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.915801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.916051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.916285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.916302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.916501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.916730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.916746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.916987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.917205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.917222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.917332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.917564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.917580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.917684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.917861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.917878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.917963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.918170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.918186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.918370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.918630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.918647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.918773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.918955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.918971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.919230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.919432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.919449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.919722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.920015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.920032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.920194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.920455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.920472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.920671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.920863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.920879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.921081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.921267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.921285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.921470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.921653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.921669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.921790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.922035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.922052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.922304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.922537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.922552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.922820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.923018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.923035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.923226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.923423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.923440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.923720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.923921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.923937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.924124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.924286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.924303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.924595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.924896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.924912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.925094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.925266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.925283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.925489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.925646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.925662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.925782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.925972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.925988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.926163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.926405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.926422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.926671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.926845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.926861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.927127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.927414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.927431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.927592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.927765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.927781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.928024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.928269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.928286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.928391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.928550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.928567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.928827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.929022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.929039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 
00:32:12.037 [2024-07-12 17:42:50.929239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.929504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.929521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.929708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.929972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.929988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.037 qpair failed and we were unable to recover it. 00:32:12.037 [2024-07-12 17:42:50.930151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.930314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.037 [2024-07-12 17:42:50.930331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.930615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.930849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.930865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.931126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.931340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.931356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.931626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.931797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.931814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.932122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.932294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.932311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.932475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.932735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.932751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.932969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.933167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.933184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.933402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.933525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.933542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.933810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.933993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.934009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.934177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.934437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.934454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.934736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.934841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.934857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.935119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.935301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.935318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.935485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.935665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.935681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.935898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.936131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.936148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.936406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.936507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.936523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.936803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.937097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.937113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.937224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.937325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.937343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.937610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.937789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.937805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.938063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.938320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.938337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.938457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.938575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.938591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.938847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.939024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.939040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.939324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.939569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.939585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.939824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.939997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.940014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.940177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.940417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.940434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.940690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.940849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.940866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.941061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.941305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.941322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.941588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.941850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.941867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.942027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.942294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.942311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.942495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.942656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.942672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.942781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.943060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.943076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.943339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.943602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.943619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.943858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.944121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.944137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.944398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.944638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.944654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.944783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.945036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.945053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.945294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.945558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.945574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.945835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.946080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.946096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.946263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.946495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.946511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.946719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.946976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.946993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.947238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.947502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.947518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.947752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.948012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.948029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.948316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.948479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.948496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.948808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.949065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.949082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.949272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.949448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.949465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.949648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.949884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.949900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.950094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.950349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.950366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.038 [2024-07-12 17:42:50.950654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.950884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.950900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 
00:32:12.038 [2024-07-12 17:42:50.951169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.951427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.038 [2024-07-12 17:42:50.951444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.038 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.951621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.951884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.951900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.952164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.952418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.952436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.952618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.952746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.952762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.952854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.953029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.953047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.953311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.953497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.953514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.953716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.953964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.953980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.954150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.954328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.954348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.954538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.954640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.954658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.954928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.955087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.955104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.955371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.955535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.955551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.955734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.955907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.955923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.956114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.956385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.956401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.956579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.956663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.956679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.956935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.957046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.957062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.957371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.957606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.957623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.957888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.958153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.958169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.958289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.958453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.958473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.958750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.959011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.959027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.959273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.959540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.959557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.959806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.959926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.959942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.960114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.960342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.960359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.960619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.960803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.960820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.961058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.961265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.961282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.961392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.961624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.961641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.961831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.962018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.962034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.962277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.962497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.962514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.962699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.962983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.963002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.963199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.963378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.963394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.963514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.963693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.963710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.963969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.964207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.964223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.964496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.964604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.964620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.964784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.965015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.965032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.965141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.965399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.965416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.965668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.965755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.965772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 00:32:12.039 [2024-07-12 17:42:50.965944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.966175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.039 [2024-07-12 17:42:50.966192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.039 qpair failed and we were unable to recover it. 
00:32:12.039 [2024-07-12 17:42:50.966376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.039 [2024-07-12 17:42:50.966627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.039 [2024-07-12 17:42:50.966644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.039 qpair failed and we were unable to recover it.
00:32:12.316 [2024-07-12 17:42:51.003327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.003513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.003529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.003709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.003981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.003998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.004097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.004363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.004380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.004543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.004724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.004741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 
00:32:12.316 [2024-07-12 17:42:51.004903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.005087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.005104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.005278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.005461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.005478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.005574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.005851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.005867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.006068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.006152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.006169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 
00:32:12.316 [2024-07-12 17:42:51.006285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.006521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.006537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.006640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.006858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.006874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.006982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.007155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.007172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.007408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.007582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.007598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 
00:32:12.316 [2024-07-12 17:42:51.007684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.007873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.007889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.008150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.008324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.008340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.008597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.008803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.008820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.009009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.009243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.009265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 
00:32:12.316 [2024-07-12 17:42:51.009442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.009627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.009644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.009844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.010114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.010130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.010394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.010569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.010586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.010703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.010991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.011007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 
00:32:12.316 [2024-07-12 17:42:51.011193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.011322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.011339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.316 [2024-07-12 17:42:51.011504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.011647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.316 [2024-07-12 17:42:51.011663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.316 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.011780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.012046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.012063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.012362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.012464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.012482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.012640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.013385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.013420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.013621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.013882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.013898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.014071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.014235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.014251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.014373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.014479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.014495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.014660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.014761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.014777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.014940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.015174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.015192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.015486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.015595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.015612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.015917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.016128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.016161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.016314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.016506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.016522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.016776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.017159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.017192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.017524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.017727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.017744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.017927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.018125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.018142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.018370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.018626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.018643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.018823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.018970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.019003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.019199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.019400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.019433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.019716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.019950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.019981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.020115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.020373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.020406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.020635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.020817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.020848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.021061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.021286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.021319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.021554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.021767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.021799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.022089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.022316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.022336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.022447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.022566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.022583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.022673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.022865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.022881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.023037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.023236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.023294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.023596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.023795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.023827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.024041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.024225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.024241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.025625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.025938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.025959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 00:32:12.317 [2024-07-12 17:42:51.026205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.026320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.026338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.317 qpair failed and we were unable to recover it. 
00:32:12.317 [2024-07-12 17:42:51.026508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.317 [2024-07-12 17:42:51.026616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.318 [2024-07-12 17:42:51.026632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.318 qpair failed and we were unable to recover it. 00:32:12.318 [2024-07-12 17:42:51.026844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.318 [2024-07-12 17:42:51.027099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.318 [2024-07-12 17:42:51.027140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.318 qpair failed and we were unable to recover it. 00:32:12.318 [2024-07-12 17:42:51.027277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.318 [2024-07-12 17:42:51.027450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.318 [2024-07-12 17:42:51.027488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.318 qpair failed and we were unable to recover it. 00:32:12.318 [2024-07-12 17:42:51.027688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.318 [2024-07-12 17:42:51.027978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.318 [2024-07-12 17:42:51.028009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.318 qpair failed and we were unable to recover it. 
00:32:12.318 [2024-07-12 17:42:51.028161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.028294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.028337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.028520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.028629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.028660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.028782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.030152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.030186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.030401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.030545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.030577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.030875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.031087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.031118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.031275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.031420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.031453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.031739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.031923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.031940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.032138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.032331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.032348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.032529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.032636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.032657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.032820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.033136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.033167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.033416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.033619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.033661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.033850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.033966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.033982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.034272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.034430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.034461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.034655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.034868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.034899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.035107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.035228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.035272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.035509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.035814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.035850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.036066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.036332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.036363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.036580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.036720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.036752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.036990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.037211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.037241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.037563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.037771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.037788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.037995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.038222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.038279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.318 qpair failed and we were unable to recover it.
00:32:12.318 [2024-07-12 17:42:51.038551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.038664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.318 [2024-07-12 17:42:51.038681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.038924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.039200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.039216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.039482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.039663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.039680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.039789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.039914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.039930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.040015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.040179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.040195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.040373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.040515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.040532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.040800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.041058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.041074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.041269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.041363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.041379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.041499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.041838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.041855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.041988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.042188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.042205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.042439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.042603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.042619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.042819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.043034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.043050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.043312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.043491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.043507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.043669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.043958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.043975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.044230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.044464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.044482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.044605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.044783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.044799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.044995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.045268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.045285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.045528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.045773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.045789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.046032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.046192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.046208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.046472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.046657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.046674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.046859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.047072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.047088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.047352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.047528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.047545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.047754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.047963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.047980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.048102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.048307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.048324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.048526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.048717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.048734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.048948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.049041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.049057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.049320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.049457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.049473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.049731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.050000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.050016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.050114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.050300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.050317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.050607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.050782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.050799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.050984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.051249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.319 [2024-07-12 17:42:51.051275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.319 qpair failed and we were unable to recover it.
00:32:12.319 [2024-07-12 17:42:51.051443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.051634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.051650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.051838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.052122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.052139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.052243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.052509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.052526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.052638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.052905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.052922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.053092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.053263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.053280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.053541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.053797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.053814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.053899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.054099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.054115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.054244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.054480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.054497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.054704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.054828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.054846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.055051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.055227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.055244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.055443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.055632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.055649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.055743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.055916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.055932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.056106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.056373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.056390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.056588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.056789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.056806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.057066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.057288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.057305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.057485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.057663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.057679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.057841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.058086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.058103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.058347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.058528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.058545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.058655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.058910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.058927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.059184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.059421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.059440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.059614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.059799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.059816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.060100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.060265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.060283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.060521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.060628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.060645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.060775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.060939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.060955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.061060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.061157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.061174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.061291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.061559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.061576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.061837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.061953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.061969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.062091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.062379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.062397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.062561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.062735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.062752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.062874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.063123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.063139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.320 [2024-07-12 17:42:51.063343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.063518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.320 [2024-07-12 17:42:51.063535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.320 qpair failed and we were unable to recover it.
00:32:12.321 [2024-07-12 17:42:51.063768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.063961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.063977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.321 qpair failed and we were unable to recover it.
00:32:12.321 [2024-07-12 17:42:51.064155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.064402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.064419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.321 qpair failed and we were unable to recover it.
00:32:12.321 [2024-07-12 17:42:51.064582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.064759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.064776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.321 qpair failed and we were unable to recover it.
00:32:12.321 [2024-07-12 17:42:51.064979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.065147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.065164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.321 qpair failed and we were unable to recover it.
00:32:12.321 [2024-07-12 17:42:51.065430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.065668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.321 [2024-07-12 17:42:51.065685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.321 qpair failed and we were unable to recover it.
00:32:12.321 [2024-07-12 17:42:51.065874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.065985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.066002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.066108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.066269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.066285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.066402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.066657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.066673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.066858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 
00:32:12.321 [2024-07-12 17:42:51.067150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.067441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.067799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.067922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.068116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.068281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.068298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 
00:32:12.321 [2024-07-12 17:42:51.068461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.068585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.068601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.068778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.069048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.069064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.069274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.069431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.069448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.069721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.069900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.069916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 
00:32:12.321 [2024-07-12 17:42:51.070085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.070243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.070273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.070452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.070576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.070592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.070775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.070954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.070983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.071242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.071364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.071380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 
00:32:12.321 [2024-07-12 17:42:51.071485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.071681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.071697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.071860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.072029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.072045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.072216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.072314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.072331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.072536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.072640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.072656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 
00:32:12.321 [2024-07-12 17:42:51.072966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.073070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.073087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.073321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.073446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.073463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.073696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.073790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.073807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.073905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.074063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.074081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 
00:32:12.321 [2024-07-12 17:42:51.074186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.074444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.074460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.321 qpair failed and we were unable to recover it. 00:32:12.321 [2024-07-12 17:42:51.074557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.321 [2024-07-12 17:42:51.074677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.074693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.074868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.074957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.074971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.075224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.075416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.075433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.075544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.075655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.075671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.075763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.075994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.076010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.076192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.076346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.076363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.076456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.076641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.076657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.076824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.077030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.077047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.077307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.077484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.077500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.077597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.077844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.077860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.078117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.078233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.078249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.078424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.078658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.078674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.078785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.078882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.078898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.078997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.079226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.079242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.079342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.079496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.079512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.079618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.079738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.079754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.079854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.080234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.080537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.080846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.080956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.081144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.081350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.081366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.081535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.081740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.081756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.082001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.082223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.082239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.082441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.082613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.082630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.082918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.083015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.083031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.083133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.083309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.083326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.083500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.083598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.083616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.083784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.084016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.084033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.084142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.084230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.084246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.084451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.084616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.084632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 00:32:12.322 [2024-07-12 17:42:51.084810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.085046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.322 [2024-07-12 17:42:51.085063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.322 qpair failed and we were unable to recover it. 
00:32:12.322 [2024-07-12 17:42:51.085222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.085328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.085344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.085578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.085763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.085779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.085882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.085980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.085997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.086176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.086274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.086290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.086452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.086556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.086572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.086744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.086972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.086994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.087084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.087189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.087205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.087303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.087534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.087550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.087710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.087872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.087888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.088060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.088153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.088170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.088289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.088577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.088593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.088851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.089032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.089049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.089249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.089361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.089377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.089610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.089793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.089809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.090041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.090270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.090287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.090462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.090734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.090754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.090999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.091176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.091192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.091365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.091539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.091556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.091832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.092033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.092048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.092157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.092363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.092379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.092489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.092657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.092674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.092831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.092985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.093001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.093178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.093267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.093284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.093457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.093612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.093628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.093818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.093913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.093929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.094022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.094123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.094141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.323 [2024-07-12 17:42:51.094267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.094434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.323 [2024-07-12 17:42:51.094450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.323 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.094611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.094707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.094723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.094980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.095220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.095235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.095342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.095567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.095582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.095690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.095937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.095953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.096045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.096199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.096214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.096480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.096646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.096662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.096767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.097060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.097075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.097301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.097543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.097559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.097640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.097866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.097882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.098090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.098243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.098267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.098464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.098623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.098639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.098810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.098977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.098993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.099090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.099242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.099265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.099423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.099578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.099594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.099856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.100087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.100102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.100265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.100508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.100523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.100636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.100825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.100841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.101027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.101133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.101149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.101319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.101414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.101430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.101546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.101810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.101825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.102054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.102237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.102259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.102426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.102593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.102608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.102761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.102926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.102943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.103170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.103274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.103291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.103517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.103631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.103647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.103805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.103983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.103998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.104169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.104278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.104294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.104464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.104616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.104632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.104788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.104965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.104981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.105105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.105205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.105221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.324 qpair failed and we were unable to recover it.
00:32:12.324 [2024-07-12 17:42:51.105316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.324 [2024-07-12 17:42:51.105395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.105410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.105515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.105686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.105701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.105801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.106049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.106065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.106221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.106484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.106500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.106600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.106778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.106793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.106960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.107212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.107228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.107390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.107551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.107567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.107752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.107921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.107937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.108088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.108276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.108292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.108478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.108630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.108646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.108893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.108973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.108990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.109207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.109363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.109380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.109603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.109792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.109808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.109909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.110184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.110483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.110806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.110939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.111052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.111239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.111262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.111445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.111555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.111570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.111677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.111870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.111886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.111979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.112266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.112283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.112437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.112592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.112608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.112799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.112885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.112900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.113052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.113136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.113153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.113240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.113417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.113433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.113656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.113878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.113894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.114052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.114208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.114223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.114472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.114703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.114718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.114942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.115105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.115121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.115337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.115434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.115451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.115549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.115722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.325 [2024-07-12 17:42:51.115737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.325 qpair failed and we were unable to recover it.
00:32:12.325 [2024-07-12 17:42:51.115822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.115981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.115996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.116103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.116293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.116309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.116504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.116675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.116691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.116960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.117115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.117131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 
00:32:12.326 [2024-07-12 17:42:51.117285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.117539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.117554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.117671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.117773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.117789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.117887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.118047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.118063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.118335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.118491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.118507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 
00:32:12.326 [2024-07-12 17:42:51.118597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.118776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.118792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.118873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.119186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.119448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 
00:32:12.326 [2024-07-12 17:42:51.119769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.119959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.120061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.120233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.120249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.120423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.120585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.120600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.120758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.120927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.120943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 
00:32:12.326 [2024-07-12 17:42:51.121101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.121216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.121232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.121395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.121617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.121632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.121808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.121903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.121919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.122092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.122267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.122283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 
00:32:12.326 [2024-07-12 17:42:51.122369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.122476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.122492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.122676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.122897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.122912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.123017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.123110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.123125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.123212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.123382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.123398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 
00:32:12.326 [2024-07-12 17:42:51.123622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.123723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.123739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.123892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.124066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.124082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.124353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.124504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.124520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.124622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.124737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.124753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 
00:32:12.326 [2024-07-12 17:42:51.124907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.125008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.125023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.125249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.125474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.125490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.326 [2024-07-12 17:42:51.125648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.125803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.326 [2024-07-12 17:42:51.125819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.326 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.126077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.126174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.126190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 
00:32:12.327 [2024-07-12 17:42:51.126470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.126692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.126707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.126888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.127065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.127080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.127183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.127337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.127353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.127526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.127696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.127711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 
00:32:12.327 [2024-07-12 17:42:51.127827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.128126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.128409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.128791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.128890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 
00:32:12.327 [2024-07-12 17:42:51.129042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.129267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.129283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.129382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.129535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.129550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.129666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.129767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.129783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.129950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.130041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.130057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 
00:32:12.327 [2024-07-12 17:42:51.130220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.130402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.130418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.130573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.130742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.130757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.131037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.131208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.131224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.131388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.131540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.131555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 
00:32:12.327 [2024-07-12 17:42:51.131747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.131851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.131866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.132029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.132327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.132624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 
00:32:12.327 [2024-07-12 17:42:51.132842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.132954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.133106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.133259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.133275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.133386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.133537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.133552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.133629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.133774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.133789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 
00:32:12.327 [2024-07-12 17:42:51.133938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.134091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.134107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.327 qpair failed and we were unable to recover it. 00:32:12.327 [2024-07-12 17:42:51.134199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.327 [2024-07-12 17:42:51.134403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.328 [2024-07-12 17:42:51.134419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.328 qpair failed and we were unable to recover it. 00:32:12.328 [2024-07-12 17:42:51.134583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.328 [2024-07-12 17:42:51.134690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.328 [2024-07-12 17:42:51.134707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.328 qpair failed and we were unable to recover it. 00:32:12.328 [2024-07-12 17:42:51.134898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.328 [2024-07-12 17:42:51.135048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.328 [2024-07-12 17:42:51.135063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.328 qpair failed and we were unable to recover it. 
00:32:12.330 [2024-07-12 17:42:51.165129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.165373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.165389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.330 qpair failed and we were unable to recover it. 00:32:12.330 [2024-07-12 17:42:51.165557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.165705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.165721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.330 qpair failed and we were unable to recover it. 00:32:12.330 [2024-07-12 17:42:51.165911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.166010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.166026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.330 qpair failed and we were unable to recover it. 00:32:12.330 [2024-07-12 17:42:51.166187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.166373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.166388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.330 qpair failed and we were unable to recover it. 
00:32:12.330 [2024-07-12 17:42:51.166492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.166595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.166610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.330 qpair failed and we were unable to recover it. 00:32:12.330 [2024-07-12 17:42:51.166810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.330 [2024-07-12 17:42:51.166997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.167012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.167094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.167275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.167291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.167474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.167643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.167659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 
00:32:12.331 [2024-07-12 17:42:51.167745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.167844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.167859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.167957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.168114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.168129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.168373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.168533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.168548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.168702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.168808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.168824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 
00:32:12.331 [2024-07-12 17:42:51.169060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.169247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.169278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.169429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.169540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.169555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.169715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.169880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.169895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.170126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.170295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.170311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 
00:32:12.331 [2024-07-12 17:42:51.170414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.170574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.170589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.170864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.170962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.170977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.171200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.171292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.171308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.171564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.171668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.171683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 
00:32:12.331 [2024-07-12 17:42:51.171846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.171946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.171961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.172184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.172341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.172356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.172458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.172607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.172623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.172812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 
00:32:12.331 [2024-07-12 17:42:51.173140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.173426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.173709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.173872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.174086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.174248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.174270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 
00:32:12.331 [2024-07-12 17:42:51.174351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.174427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.174442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.174550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.174795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.174810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.174960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.175136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.175151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 00:32:12.331 [2024-07-12 17:42:51.175252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.175429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.175444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.331 qpair failed and we were unable to recover it. 
00:32:12.331 [2024-07-12 17:42:51.175623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.331 [2024-07-12 17:42:51.175864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.175878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.176114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.176233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.176248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.176498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.176666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.176681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.176795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.176954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.176969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 
00:32:12.332 [2024-07-12 17:42:51.177122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.177202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.177218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.177318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.177491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.177506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.177598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.177707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.177723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.177889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 
00:32:12.332 [2024-07-12 17:42:51.178278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.178611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.178818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.178994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.179097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.179207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.179222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 
00:32:12.332 [2024-07-12 17:42:51.179391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.179488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.179503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.179598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.179748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.179764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.179928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.180012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.180029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.180264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.180439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.180454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 
00:32:12.332 [2024-07-12 17:42:51.180537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.180683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.180698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.180895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.181223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.181437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 
00:32:12.332 [2024-07-12 17:42:51.181668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.181869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.181983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.182268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.182544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 
00:32:12.332 [2024-07-12 17:42:51.182799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.182931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.183028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.183123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.183136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.183287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.183514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.183530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.183610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.183700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.183716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 
00:32:12.332 [2024-07-12 17:42:51.183811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.184319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.184342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.184445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.184594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.184609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.332 [2024-07-12 17:42:51.184706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.184871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.332 [2024-07-12 17:42:51.184886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.332 qpair failed and we were unable to recover it. 00:32:12.333 [2024-07-12 17:42:51.185143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.333 [2024-07-12 17:42:51.185230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.333 [2024-07-12 17:42:51.185245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.333 qpair failed and we were unable to recover it. 
00:32:12.333 [2024-07-12 17:42:51.185475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.185563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.185579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.185692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.185876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.185892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.186067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.186229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.186246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.186352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.186462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.186478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.186584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.186769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.186784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.186953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.187164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.187393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.187658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.187835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.188032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.188124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.188139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 124731 Killed "${NVMF_APP[@]}" "$@"
00:32:12.333 [2024-07-12 17:42:51.188294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.188382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.188397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.188558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.188773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.188788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 17:42:51 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:32:12.333 [2024-07-12 17:42:51.188888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.189125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.189140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 17:42:51 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:12.333 [2024-07-12 17:42:51.189230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.189339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.189355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 17:42:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:12.333 [2024-07-12 17:42:51.189507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.189695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.189710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 17:42:51 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:12.333 [2024-07-12 17:42:51.189876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 17:42:51 -- common/autotest_common.sh@10 -- # set +x
00:32:12.333 [2024-07-12 17:42:51.190033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.190049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.190132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.190288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.190304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.190414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.190588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.190603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.190841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.190995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.191010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.191167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.191327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.191343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.191502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.191604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.191620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.191720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.191880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.191895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.191983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.192264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.192520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.192730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.192900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.193055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.193153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.193168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.193320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.193402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.193417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.333 qpair failed and we were unable to recover it.
00:32:12.333 [2024-07-12 17:42:51.193604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.333 [2024-07-12 17:42:51.193683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.193698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.193787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.193868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.193883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.193986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.194141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.194156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.194344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.194502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.194517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.194606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.194696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.194711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.194861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.195231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.195532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.195718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.195840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.195985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.196061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.196076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- nvmf/common.sh@469 -- # nvmfpid=125713
00:32:12.334 [2024-07-12 17:42:51.196293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.196387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.196403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- nvmf/common.sh@470 -- # waitforlisten 125713
00:32:12.334 17:42:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:12.334 [2024-07-12 17:42:51.196628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.196831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.196846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- common/autotest_common.sh@819 -- # '[' -z 125713 ']'
00:32:12.334 [2024-07-12 17:42:51.196943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:12.334 [2024-07-12 17:42:51.197214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:12.334 [2024-07-12 17:42:51.197474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-07-12 17:42:51.197762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.197860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:12.334 [2024-07-12 17:42:51.197965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 17:42:51 -- common/autotest_common.sh@10 -- # set +x
00:32:12.334 [2024-07-12 17:42:51.198246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.198452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.198643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.198840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.198943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.199040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.199233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.199534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.199811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.199926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.200018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.200167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.200182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.200280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.200450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.200465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.200706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.200854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.200868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.334 [2024-07-12 17:42:51.200957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.201123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.334 [2024-07-12 17:42:51.201138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.334 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.201289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.201454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.201469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.201639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.201749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.201767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.201922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.202104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.202323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.202534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.202707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.202870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.203212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.203525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.203792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.203889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.204052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.204205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.204220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.204370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.204472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.335 [2024-07-12 17:42:51.204489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420
00:32:12.335 qpair failed and we were unable to recover it.
00:32:12.335 [2024-07-12 17:42:51.204569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.204688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.204703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.204808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.204952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.204967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.205055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.205228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.205243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.205431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.205531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.205546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 
00:32:12.335 [2024-07-12 17:42:51.205624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.205699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.205714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.205876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.206174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.206517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 
00:32:12.335 [2024-07-12 17:42:51.206833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.206995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.207088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.207233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.207251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.207362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.207570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.207584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.207681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.207773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.207788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 
00:32:12.335 [2024-07-12 17:42:51.207935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.208215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.208499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.208770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.208940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 
00:32:12.335 [2024-07-12 17:42:51.209045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.209133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.209148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.335 qpair failed and we were unable to recover it. 00:32:12.335 [2024-07-12 17:42:51.209229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.335 [2024-07-12 17:42:51.209321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.209335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.209414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.209513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.209528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.209627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.209716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.209731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 
00:32:12.336 [2024-07-12 17:42:51.209887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.209972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.209987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.210161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.210321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.210337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.210436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.210550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.210564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.210658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.210740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.210755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 
00:32:12.336 [2024-07-12 17:42:51.210908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.211273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.211462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.211736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.211897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 
00:32:12.336 [2024-07-12 17:42:51.211975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.212135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.212150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.212302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.212454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.212470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.212656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.212869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.212884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.212989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.213145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.213160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 
00:32:12.336 [2024-07-12 17:42:51.213354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.213459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.213475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.213587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.213735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.213750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.213913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.214194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 
00:32:12.336 [2024-07-12 17:42:51.214380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.214556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.214655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.214802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.215023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.215038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.215263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.215354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.215369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 
00:32:12.336 [2024-07-12 17:42:51.215612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.215710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.215725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.215944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.216100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.216115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.216316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.216530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.216545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.216713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.216889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.216904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 
00:32:12.336 [2024-07-12 17:42:51.216988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.217062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.336 [2024-07-12 17:42:51.217077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.336 qpair failed and we were unable to recover it. 00:32:12.336 [2024-07-12 17:42:51.217244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.217436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.217451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.217560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.217663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.217678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.217777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.217973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.217988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 
00:32:12.337 [2024-07-12 17:42:51.218150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.218360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.218576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.218770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.218973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 
00:32:12.337 [2024-07-12 17:42:51.219146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.219241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.219263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.219422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.219586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.219601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.219694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.219921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.219936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.220103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 
00:32:12.337 [2024-07-12 17:42:51.220378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.220557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.220750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.220862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.220942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.221096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.221111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 
00:32:12.337 [2024-07-12 17:42:51.221218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.221390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.221405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.221557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.221735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.221750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.221903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.221986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.222000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.222150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.222230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.222245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 
00:32:12.337 [2024-07-12 17:42:51.222343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.222509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.222524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.222612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.222734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.222748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.222966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.223214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.223229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.223328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.223408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.223423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 
00:32:12.337 [2024-07-12 17:42:51.223512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.223608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.223623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.223904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.224226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.224433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 
00:32:12.337 [2024-07-12 17:42:51.224706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.224935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.225018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.225102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.225117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.225229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.225346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.225361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.337 qpair failed and we were unable to recover it. 00:32:12.337 [2024-07-12 17:42:51.225522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.225619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.337 [2024-07-12 17:42:51.225634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.225783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.225892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.225906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.225986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.226204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.226219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.226443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.226597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.226612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.226792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.226953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.226968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f093c000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.227074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.227166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.227179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.227339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.227443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.227453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.227549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.227639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.227650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.227791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.227999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.228088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.228327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.228521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.228758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.228921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.228993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.229179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.229454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.229693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.229800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.229941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.230244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.230470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.230662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.230890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.230968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.231128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.231202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.231214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.231392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.231547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.231557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.231633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.231815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.231827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.232006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.232151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.232162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.232300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.232452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.232464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.232615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.232761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.232772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.232916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.233071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.233082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 
00:32:12.338 [2024-07-12 17:42:51.233233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.233306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.233316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.233403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.233501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.233512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.338 qpair failed and we were unable to recover it. 00:32:12.338 [2024-07-12 17:42:51.233725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.338 [2024-07-12 17:42:51.233863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.233873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.234055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.234378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.234551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.234775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.234868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.235032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.235181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.235190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.235327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.235439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.235451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.235598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.235757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.235769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.235857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.236101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.236355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.236585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.236679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.236905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.237214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.237452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.237870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.237947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.238023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.238188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.238437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.238755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.238929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.239070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.239237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.239249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.239403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.239481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.239492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.239667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.239750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.239760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.239844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.240144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.240375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.240690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.240785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.240880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.241194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.241371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 
00:32:12.339 [2024-07-12 17:42:51.241650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.339 [2024-07-12 17:42:51.241729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.339 qpair failed and we were unable to recover it. 00:32:12.339 [2024-07-12 17:42:51.241868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.242078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.242087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.242229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.242390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.242401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.242620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.242791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.242802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 
00:32:12.340 [2024-07-12 17:42:51.242904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.243117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.243130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.340 qpair failed and we were unable to recover it.
00:32:12.340 [2024-07-12 17:42:51.243305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.243518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.243529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.340 qpair failed and we were unable to recover it.
00:32:12.340 [2024-07-12 17:42:51.243670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.243807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.243818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.340 qpair failed and we were unable to recover it.
00:32:12.340 [2024-07-12 17:42:51.243909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.340 qpair failed and we were unable to recover it.
00:32:12.340 [2024-07-12 17:42:51.244080] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:12.340 [2024-07-12 17:42:51.244131] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:12.340 [2024-07-12 17:42:51.244151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.340 qpair failed and we were unable to recover it.
00:32:12.340 [2024-07-12 17:42:51.244474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.340 qpair failed and we were unable to recover it.
00:32:12.340 [2024-07-12 17:42:51.244775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.340 [2024-07-12 17:42:51.244862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.340 qpair failed and we were unable to recover it.
00:32:12.340 [2024-07-12 17:42:51.245011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.245315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.245548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.245780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.245885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 
00:32:12.340 [2024-07-12 17:42:51.246043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.246224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.246235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.246330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.246407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.246418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.246677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.246756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.246767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.246859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 
00:32:12.340 [2024-07-12 17:42:51.247246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.247582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.247768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.247922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.248153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.248391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.248403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 
00:32:12.340 [2024-07-12 17:42:51.248553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.248644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.248654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.248912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.249168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.249179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.249371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.249482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.249492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.340 qpair failed and we were unable to recover it. 00:32:12.340 [2024-07-12 17:42:51.249698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.340 [2024-07-12 17:42:51.249932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.249942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.250039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.250179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.250190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.250349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.250557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.250569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.250756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.250921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.250932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.251100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.251282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.251520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.251819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.251918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.252062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.252319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.252330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.252398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.252475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.252485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.252693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.252772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.252783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.252942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.253246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.253510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.253724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.253838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.254069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.254152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.254163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.254342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.254537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.254548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.254766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.254975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.254986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.255089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.255229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.255240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.255403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.255544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.255555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.255702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.255855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.255866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.256006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.256275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.256510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.256808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.256908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.256991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.257271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.257283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.257494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.257730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.257741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.257834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.257970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.257980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.258179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.258269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.258281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 
00:32:12.341 [2024-07-12 17:42:51.258568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.258750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.258760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.341 [2024-07-12 17:42:51.258986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.259201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.341 [2024-07-12 17:42:51.259212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.341 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.259420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.259576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.259587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.259740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.259971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.259981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 
00:32:12.342 [2024-07-12 17:42:51.260070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.260207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.260218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.260311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.260415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.260425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.260663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.260762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.260773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.260916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 
00:32:12.342 [2024-07-12 17:42:51.261095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.261360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.261588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.261677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.261767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 
00:32:12.342 [2024-07-12 17:42:51.262342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.262519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.262796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.262913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.263096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 
00:32:12.342 [2024-07-12 17:42:51.263414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.263641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.263818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.263974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 00:32:12.342 [2024-07-12 17:42:51.264051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.264138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.342 [2024-07-12 17:42:51.264149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.342 qpair failed and we were unable to recover it. 
00:32:12.606 [2024-07-12 17:42:51.280057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.606 [2024-07-12 17:42:51.280143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.606 [2024-07-12 17:42:51.280152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.606 qpair failed and we were unable to recover it.
00:32:12.606 EAL: No free 2048 kB hugepages reported on node 1
00:32:12.607 [2024-07-12 17:42:51.315971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.316297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.316544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.316793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.316903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 
00:32:12.607 [2024-07-12 17:42:51.317001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.317320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.317579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.317785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.317881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 
00:32:12.607 [2024-07-12 17:42:51.318064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.318319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.318583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 00:32:12.607 [2024-07-12 17:42:51.318763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.607 [2024-07-12 17:42:51.318942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.607 qpair failed and we were unable to recover it. 
00:32:12.607 [2024-07-12 17:42:51.319207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.319345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.319355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.319591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.319677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.319686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.319841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.319925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.319934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.320009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.320295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.320567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.320839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.320913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.320993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.321220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.321452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.321784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.321884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.321967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.322210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.322390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.322584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.322829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.322918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.323004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.607 [2024-07-12 17:42:51.323013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.607 qpair failed and we were unable to recover it.
00:32:12.607 [2024-07-12 17:42:51.323223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.323473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.323712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.323890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.323964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.324106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.324418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.324562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.324712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.324802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.324889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.325143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.325317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.325490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.325788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.325869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.325950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.326101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.326310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.326468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.326627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.326703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.326887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.327173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.327409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.327654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.327815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.328013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.328181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.328332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.328631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.328787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.328867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.328973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.329140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.329150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.329224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.329357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.608 [2024-07-12 17:42:51.329367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.608 qpair failed and we were unable to recover it.
00:32:12.608 [2024-07-12 17:42:51.329558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.329625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.329634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.329780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.330130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.330377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.330622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.330785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.330879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.331068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.331077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.331232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.331378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.331389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.331601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.331753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.331764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.331838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.331993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.332002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.332146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.332289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.332298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.332386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.332468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.332476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.332569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.332732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.332743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.333003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.333238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.333496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.333728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.333819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.333956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.334022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.334031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.334240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.334338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.334347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.334512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.334654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.334663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.609 qpair failed and we were unable to recover it.
00:32:12.609 [2024-07-12 17:42:51.334749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.609 [2024-07-12 17:42:51.363372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:12.609 [2024-07-12 17:42:51.425784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:12.609 [2024-07-12 17:42:51.426051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:12.609 [2024-07-12 17:42:51.426073] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:12.609 [2024-07-12 17:42:51.426093] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:12.609 [2024-07-12 17:42:51.426231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:32:12.609 [2024-07-12 17:42:51.426276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:32:12.609 [2024-07-12 17:42:51.426397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:32:12.609 [2024-07-12 17:42:51.426390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:32:12.875 [2024-07-12 17:42:51.734285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.734330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.875 qpair failed and we were unable to recover it.
00:32:12.875 [2024-07-12 17:42:51.734576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.734790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.734801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.875 qpair failed and we were unable to recover it.
00:32:12.875 [2024-07-12 17:42:51.734878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.735026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.735036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.875 qpair failed and we were unable to recover it.
00:32:12.875 [2024-07-12 17:42:51.735132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.735389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.735400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.875 qpair failed and we were unable to recover it.
00:32:12.875 [2024-07-12 17:42:51.735503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.735658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.735668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.875 qpair failed and we were unable to recover it.
00:32:12.875 [2024-07-12 17:42:51.735882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.736092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.875 [2024-07-12 17:42:51.736102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.875 qpair failed and we were unable to recover it.
00:32:12.875 [2024-07-12 17:42:51.736248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.736402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.736413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.736554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.736697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.736711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.736851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.737264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 
00:32:12.875 [2024-07-12 17:42:51.737588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.737752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.737996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.738085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.738231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.738241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.738453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.738595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.738605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 
00:32:12.875 [2024-07-12 17:42:51.738697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.738778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.738788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.738972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.739193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.739608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 
00:32:12.875 [2024-07-12 17:42:51.739803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.739878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.739952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.740213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.740223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.740332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.740409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.740419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.740633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.740865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.740875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 
00:32:12.875 [2024-07-12 17:42:51.741007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.741096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.741107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.741246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.741337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.741348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.741486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.741679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.741688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.875 qpair failed and we were unable to recover it. 00:32:12.875 [2024-07-12 17:42:51.741778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.875 [2024-07-12 17:42:51.741858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.741868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.741953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.742177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.742489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.742803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.742899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.743038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.743425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.743686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.743882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.743982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.744062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.744187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.744197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.744334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.744487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.744498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.744583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.744669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.744678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.744761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.745245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.745413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.745708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.745864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.746023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.746261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.746578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.746760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.746856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.746994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.747226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.747461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.747747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.747856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.748063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.748279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.748289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.748540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.748633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.748643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.748780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.748989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.748998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.749206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.749359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.749370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.876 [2024-07-12 17:42:51.749515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.749594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.749604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 
00:32:12.876 [2024-07-12 17:42:51.749759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.749838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.876 [2024-07-12 17:42:51.749848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.876 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.750060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.750308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.750737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 
00:32:12.877 [2024-07-12 17:42:51.750918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.750996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.751152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.751340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.751350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.751560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.751696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.751706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.751779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.751993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.752003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 
00:32:12.877 [2024-07-12 17:42:51.752238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.752311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.752321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.752475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.752682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.752692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.752832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.752914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.752924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.753077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.753243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.753257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 
00:32:12.877 [2024-07-12 17:42:51.753392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.753540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.753550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.753719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.753865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.753875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.753949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.754029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.754040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 00:32:12.877 [2024-07-12 17:42:51.754148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.754288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.877 [2024-07-12 17:42:51.754300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.877 qpair failed and we were unable to recover it. 
00:32:12.877 [2024-07-12 17:42:51.754534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.877 [2024-07-12 17:42:51.754689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.877 [2024-07-12 17:42:51.754699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:12.877 qpair failed and we were unable to recover it.
[The same failure unit repeats continuously from 17:42:51.754907 through 17:42:51.781067: two posix_sock_create connect() errno-111 errors, then an nvme_tcp_qpair_connect_sock error for tqpair=0x7f0944000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." Only the microsecond timestamps differ between repeats.]
00:32:12.880 [2024-07-12 17:42:51.781158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.781320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.781332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.781515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.781656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.781667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.781818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.782216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 
00:32:12.880 [2024-07-12 17:42:51.782441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.782671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.782817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.782916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.783099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 
00:32:12.880 [2024-07-12 17:42:51.783388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.783619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.783784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.783885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.784021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.784034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.880 [2024-07-12 17:42:51.784185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.784390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.784400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 
00:32:12.880 [2024-07-12 17:42:51.784569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.784650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.880 [2024-07-12 17:42:51.784660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.880 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.784810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.784947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.784957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.785168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.785373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.785384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.785618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.785770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.785780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 
00:32:12.881 [2024-07-12 17:42:51.785953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.786037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.786047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.786262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.786495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.786505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.786608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.786766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.786776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.786863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.787030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.787040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 
00:32:12.881 [2024-07-12 17:42:51.787286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.787382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.787394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.787628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.787716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.787726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.787815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.787993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.788093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 
00:32:12.881 [2024-07-12 17:42:51.788251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.788422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.788647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.788867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.788975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 
00:32:12.881 [2024-07-12 17:42:51.789207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.789447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.789679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.789773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.789861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 
00:32:12.881 [2024-07-12 17:42:51.790228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.790534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.790867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.790943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.791198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.791368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.791378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 
00:32:12.881 [2024-07-12 17:42:51.791480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.791569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.791579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.791731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.791957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.791967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.792059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.792207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.792217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.792355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.792588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.792598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 
00:32:12.881 [2024-07-12 17:42:51.792754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.792856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.792866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.792968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.793047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.793057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.793199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.793433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.793443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.881 qpair failed and we were unable to recover it. 00:32:12.881 [2024-07-12 17:42:51.793602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.881 [2024-07-12 17:42:51.793741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.793751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 
00:32:12.882 [2024-07-12 17:42:51.793844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.794170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.794458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.794768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.794850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 
00:32:12.882 [2024-07-12 17:42:51.794930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.795326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.795492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.795805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.795907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 
00:32:12.882 [2024-07-12 17:42:51.796068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.796204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.796214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.796297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.796447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.796457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.796594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.796737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.796746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 00:32:12.882 [2024-07-12 17:42:51.796966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.797170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.797180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it. 
00:32:12.882 [2024-07-12 17:42:51.797271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.797477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.882 [2024-07-12 17:42:51.797487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.882 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 17:42:51.797629 through 17:42:51.821965, always for tqpair=0x7f0944000b90, addr=10.0.0.2, port=4420 ...]
00:32:12.885 [2024-07-12 17:42:51.822050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.822244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.822470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.822708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.822782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 
00:32:12.885 [2024-07-12 17:42:51.822923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.823061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.823071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.823304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.823471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.823481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.823723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.823872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.823883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.824035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.824188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.824199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 
00:32:12.885 [2024-07-12 17:42:51.824356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.824558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.824568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.824647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.824793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.824802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.825009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.825164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.825174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.825438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.825653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.825663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 
00:32:12.885 [2024-07-12 17:42:51.825808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.825966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.825977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.826194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.826425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.826584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 
00:32:12.885 [2024-07-12 17:42:51.826809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.826895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.885 [2024-07-12 17:42:51.827137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.827287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.885 [2024-07-12 17:42:51.827299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.885 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.827471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.827548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.827558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.827784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.827990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.828000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.828155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.828321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.828332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.828563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.828662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.828672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.828742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.828835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.828844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.828995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.829230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.829239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.829342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.829430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.829440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.829593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.829737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.829747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.829918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.830223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.830418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.830808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.830949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.831057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.831197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.831207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.831295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.831390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.831400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.831557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.831764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.831773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.831927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.832132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.832314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.832549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.832881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.832991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.833071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.833147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.833156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.833302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.833390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.833399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.833491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.833656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.833666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.833808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.834142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.834295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.834582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.834854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.834966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.835039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.835119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.835129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 00:32:12.886 [2024-07-12 17:42:51.835215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.835299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.835309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.886 qpair failed and we were unable to recover it. 
00:32:12.886 [2024-07-12 17:42:51.835514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.886 [2024-07-12 17:42:51.835668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.887 [2024-07-12 17:42:51.835678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.887 qpair failed and we were unable to recover it. 00:32:12.887 [2024-07-12 17:42:51.835821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.887 [2024-07-12 17:42:51.835903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.887 [2024-07-12 17:42:51.835914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:12.887 qpair failed and we were unable to recover it. 00:32:12.887 [2024-07-12 17:42:51.836055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.836209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.836221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.164 qpair failed and we were unable to recover it. 00:32:13.164 [2024-07-12 17:42:51.836390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.836530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.836540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.164 qpair failed and we were unable to recover it. 
00:32:13.164 [2024-07-12 17:42:51.836617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.836759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.836770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.164 qpair failed and we were unable to recover it. 00:32:13.164 [2024-07-12 17:42:51.836915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.837064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.837074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.164 qpair failed and we were unable to recover it. 00:32:13.164 [2024-07-12 17:42:51.837165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.837242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.837279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.164 qpair failed and we were unable to recover it. 00:32:13.164 [2024-07-12 17:42:51.837382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.837558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.164 [2024-07-12 17:42:51.837568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.164 qpair failed and we were unable to recover it. 
00:32:13.164 [2024-07-12 17:42:51.837651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:13.164 [2024-07-12 17:42:51.837804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:13.164 [2024-07-12 17:42:51.837813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 
00:32:13.164 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages for tqpair=0x7f0944000b90 (addr=10.0.0.2, port=4420) repeated verbatim from 17:42:51.837885 through 17:42:51.861649 elided ...]
00:32:13.167 [2024-07-12 17:42:51.861717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.167 [2024-07-12 17:42:51.861882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.167 [2024-07-12 17:42:51.861892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.167 qpair failed and we were unable to recover it. 00:32:13.167 [2024-07-12 17:42:51.862037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.167 [2024-07-12 17:42:51.862179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.167 [2024-07-12 17:42:51.862189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.167 qpair failed and we were unable to recover it. 00:32:13.167 [2024-07-12 17:42:51.862341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.167 [2024-07-12 17:42:51.862441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.862452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.862534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.862605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.862616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 
00:32:13.168 [2024-07-12 17:42:51.862750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.862838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.862848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.862985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.863231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.863528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 
00:32:13.168 [2024-07-12 17:42:51.863761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.863854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.863992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.864070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.864081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.864241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.864417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.864427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.864609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.864825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.864835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 
00:32:13.168 [2024-07-12 17:42:51.864985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.865202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.865212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.865301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.865456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.865466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.865675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.865825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.865837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.865907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.866057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.866068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 
00:32:13.168 [2024-07-12 17:42:51.866207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.866413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.866424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.866584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.866724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.866735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.866872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.867286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 
00:32:13.168 [2024-07-12 17:42:51.867487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.867754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.867865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.868005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.868396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 
00:32:13.168 [2024-07-12 17:42:51.868666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.868889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.868988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.869081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.869234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.869244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.869358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.869513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.869523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 
00:32:13.168 [2024-07-12 17:42:51.869605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.869687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.168 [2024-07-12 17:42:51.869697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.168 qpair failed and we were unable to recover it. 00:32:13.168 [2024-07-12 17:42:51.869960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.870112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.870122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.870281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.870420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.870430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.870684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.870889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.870899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 
00:32:13.169 [2024-07-12 17:42:51.870997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.871073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.871082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.871233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.871466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.871476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.871680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.871768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.871780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.871976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.872046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.872056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 
00:32:13.169 [2024-07-12 17:42:51.872261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.872480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.872490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.872641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.872778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.872789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.872951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.873104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.873115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.873350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.873518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.873529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 
00:32:13.169 [2024-07-12 17:42:51.873681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.873843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.873853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.874065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.874231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.874241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.874487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.874643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.874653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.874754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.874842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.874852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 
00:32:13.169 [2024-07-12 17:42:51.874949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.875279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.875592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 00:32:13.169 [2024-07-12 17:42:51.875887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.875966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.169 qpair failed and we were unable to recover it. 
00:32:13.169 [2024-07-12 17:42:51.876108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.169 [2024-07-12 17:42:51.876268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.876279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.876373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.876523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.876534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.876615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.876689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.876699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.876871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.876965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.876975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 
00:32:13.170 [2024-07-12 17:42:51.877051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.877310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.877483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.877877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.877971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 
00:32:13.170 [2024-07-12 17:42:51.878180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.878346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.878357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.878452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.878659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.878670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.878757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.878847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.878858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 00:32:13.170 [2024-07-12 17:42:51.878927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.879063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.170 [2024-07-12 17:42:51.879073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.170 qpair failed and we were unable to recover it. 
00:32:13.173 [log truncated: the four-line failure cycle above repeats, differing only in timestamps, from 2024-07-12 17:42:51.879296 through 17:42:51.903895; every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered.]
00:32:13.173 [2024-07-12 17:42:51.904050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.904260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.904271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.173 [2024-07-12 17:42:51.904429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.904581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.904591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.173 [2024-07-12 17:42:51.904673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.904820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.904830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.173 [2024-07-12 17:42:51.904928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.905067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.905077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 
00:32:13.173 [2024-07-12 17:42:51.905245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.905402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.905413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.173 [2024-07-12 17:42:51.905499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.905747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.905756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.173 [2024-07-12 17:42:51.905921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.906012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.906022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.173 [2024-07-12 17:42:51.906184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.906336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.906346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 
00:32:13.173 [2024-07-12 17:42:51.906488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.906747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.906757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.173 [2024-07-12 17:42:51.906931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.907088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.173 [2024-07-12 17:42:51.907098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.173 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.907176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.907312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.907322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.907556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.907625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.907635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 
00:32:13.174 [2024-07-12 17:42:51.907787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.907926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.907937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.908145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.908285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.908296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.908504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.908588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.908598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.908738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.908836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.908846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 
00:32:13.174 [2024-07-12 17:42:51.908993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.909230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.909411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.909641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.909730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 
00:32:13.174 [2024-07-12 17:42:51.909885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.910220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.910486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.910735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.910826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 
00:32:13.174 [2024-07-12 17:42:51.910898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.911048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.911058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.911152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.911323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.911334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.911513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.911648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.911658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.911795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.912026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.912036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 
00:32:13.174 [2024-07-12 17:42:51.912130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.912268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.912278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.912459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.912642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.912653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.174 [2024-07-12 17:42:51.912884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.913046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.174 [2024-07-12 17:42:51.913056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.174 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.913130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.913340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.913352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 
00:32:13.175 [2024-07-12 17:42:51.913439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.913597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.913607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.913743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.913922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.913932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.914023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.914188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.914199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.914338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.914547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.914557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 
00:32:13.175 [2024-07-12 17:42:51.914802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.914872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.914883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.915176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.915410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.915420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.915526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.915669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.915678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.915856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 
00:32:13.175 [2024-07-12 17:42:51.916116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.916444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.916825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.916904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.917075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.917215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.917226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 
00:32:13.175 [2024-07-12 17:42:51.917476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.917627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.917637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.917736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.917839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.917849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.917929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.918356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 
00:32:13.175 [2024-07-12 17:42:51.918717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.918900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.918985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.919074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.919402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 
00:32:13.175 [2024-07-12 17:42:51.919674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.919830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.919916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.920072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.920280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.920291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.920381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.920462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.920472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 
00:32:13.175 [2024-07-12 17:42:51.920605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.920671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.175 [2024-07-12 17:42:51.920682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.175 qpair failed and we were unable to recover it. 00:32:13.175 [2024-07-12 17:42:51.920770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.176 [2024-07-12 17:42:51.920844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.176 [2024-07-12 17:42:51.920854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.176 qpair failed and we were unable to recover it. 00:32:13.176 [2024-07-12 17:42:51.920923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.176 [2024-07-12 17:42:51.921076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.176 [2024-07-12 17:42:51.921086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.176 qpair failed and we were unable to recover it. 00:32:13.176 [2024-07-12 17:42:51.921153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.176 [2024-07-12 17:42:51.921231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.176 [2024-07-12 17:42:51.921240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.176 qpair failed and we were unable to recover it. 
00:32:13.179 [2024-07-12 17:42:51.942578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.942666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.942675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.942823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.942992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.943001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.943066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.943157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.943166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.943323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.943412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.943421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 
00:32:13.179 [2024-07-12 17:42:51.943496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.943661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.943669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.943894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.944152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.944325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 
00:32:13.179 [2024-07-12 17:42:51.944483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.944646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.944891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.944961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.945110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.945118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.945202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.945348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.945358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 
00:32:13.179 [2024-07-12 17:42:51.945516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.945595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.179 [2024-07-12 17:42:51.945604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.179 qpair failed and we were unable to recover it. 00:32:13.179 [2024-07-12 17:42:51.945753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.945820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.945829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.945906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.946152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 
00:32:13.180 [2024-07-12 17:42:51.946490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.946807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.946883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.947094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.947243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.947252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.947469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.947606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.947614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 
00:32:13.180 [2024-07-12 17:42:51.947709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.947813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.947822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.947963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.948193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.948366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 
00:32:13.180 [2024-07-12 17:42:51.948601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.948830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.948912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.948986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.949161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 
00:32:13.180 [2024-07-12 17:42:51.949446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.949735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.949894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.950032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.950261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 
00:32:13.180 [2024-07-12 17:42:51.950492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.950751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.950895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.951029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.951180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.951189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.951400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.951500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.951509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 
00:32:13.180 [2024-07-12 17:42:51.951646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.951818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.951826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.951972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.952109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.952118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.952287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.952514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.952523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.180 qpair failed and we were unable to recover it. 00:32:13.180 [2024-07-12 17:42:51.952735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.952884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.180 [2024-07-12 17:42:51.952892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 
00:32:13.181 [2024-07-12 17:42:51.952972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.953279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.953633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.953796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.953875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 
00:32:13.181 [2024-07-12 17:42:51.954037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.954340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.954708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.954871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.954967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 
00:32:13.181 [2024-07-12 17:42:51.955055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.955132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.955142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.955225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.955457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.955467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.955640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.955792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.955801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.955868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 
00:32:13.181 [2024-07-12 17:42:51.956097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.956311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.956540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.956791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.956947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 
00:32:13.181 [2024-07-12 17:42:51.957016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.957164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.957173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.957261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.957418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.957427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.957599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.957754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.957765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.957976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 
00:32:13.181 [2024-07-12 17:42:51.958158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.958342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.958591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.958736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.958947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.959153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.959162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 
00:32:13.181 [2024-07-12 17:42:51.959269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.959407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.959415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.959555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.959635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.959643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.181 qpair failed and we were unable to recover it. 00:32:13.181 [2024-07-12 17:42:51.959780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.181 [2024-07-12 17:42:51.959844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.959852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.959923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.959995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 
00:32:13.182 [2024-07-12 17:42:51.960086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.960408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.960563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.960888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.960996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 
00:32:13.182 [2024-07-12 17:42:51.961084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.961403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.961573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.961747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.961827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 
00:32:13.182 [2024-07-12 17:42:51.961962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.962208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.962537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.962770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.962930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 
00:32:13.182 [2024-07-12 17:42:51.963007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.963251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.963263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.963408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.963491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.963505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.963592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.963774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.963783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.963992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.964076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.964085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 
00:32:13.182 [2024-07-12 17:42:51.964159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.964380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.964389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.964489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.964630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.964639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.964897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.965205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 
00:32:13.182 [2024-07-12 17:42:51.965454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.965662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.965881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.965978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.966128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.966295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.966304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 
00:32:13.182 [2024-07-12 17:42:51.966370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.966521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.182 [2024-07-12 17:42:51.966530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.182 qpair failed and we were unable to recover it. 00:32:13.182 [2024-07-12 17:42:51.966619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.966755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.966764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.966919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.967303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 
00:32:13.183 [2024-07-12 17:42:51.967481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.967697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.967778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.967924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.968158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 
00:32:13.183 [2024-07-12 17:42:51.968434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.968821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.968910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.968989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.969225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 
00:32:13.183 [2024-07-12 17:42:51.969439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.969756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.969919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.970066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.970140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.970148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.970344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.970426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.970435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 
00:32:13.183 [2024-07-12 17:42:51.970597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.970750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.970758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.970920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.971184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.971544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 
00:32:13.183 [2024-07-12 17:42:51.971873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.971931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.972139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.972276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.972286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.972445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.972522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.972531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.972755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.972855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.972864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 
00:32:13.183 [2024-07-12 17:42:51.972938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.973232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.973521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 00:32:13.183 [2024-07-12 17:42:51.973662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.183 [2024-07-12 17:42:51.973757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.183 qpair failed and we were unable to recover it. 
00:32:13.184 [2024-07-12 17:42:51.973917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.184 qpair failed and we were unable to recover it. 00:32:13.184 [2024-07-12 17:42:51.974183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.184 qpair failed and we were unable to recover it. 00:32:13.184 [2024-07-12 17:42:51.974358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.184 qpair failed and we were unable to recover it. 00:32:13.184 [2024-07-12 17:42:51.974587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.974829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.184 qpair failed and we were unable to recover it. 
00:32:13.184 [2024-07-12 17:42:51.974987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.975198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.975206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.184 qpair failed and we were unable to recover it. 00:32:13.184 [2024-07-12 17:42:51.975352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.975499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.975508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.184 qpair failed and we were unable to recover it. 00:32:13.184 [2024-07-12 17:42:51.975668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.975779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.184 [2024-07-12 17:42:51.975788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.185 qpair failed and we were unable to recover it. 00:32:13.185 [2024-07-12 17:42:51.975939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.976189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.976199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.185 qpair failed and we were unable to recover it. 
00:32:13.185 [2024-07-12 17:42:51.976284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.976376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.976385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.185 qpair failed and we were unable to recover it. 00:32:13.185 [2024-07-12 17:42:51.976466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.976650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.976659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.185 qpair failed and we were unable to recover it. 00:32:13.185 [2024-07-12 17:42:51.976821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.977031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.977040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.185 qpair failed and we were unable to recover it. 00:32:13.185 [2024-07-12 17:42:51.977111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.977182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.185 [2024-07-12 17:42:51.977190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.185 qpair failed and we were unable to recover it. 
[... identical error cycle repeats: posix.c:1032:posix_sock_create connect() failed (errno = 111), followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", continuing from 2024-07-12 17:42:51.977323 through 17:42:51.999588 ...]
00:32:13.187 [2024-07-12 17:42:51.999663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.187 [2024-07-12 17:42:51.999817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.187 [2024-07-12 17:42:51.999827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.187 qpair failed and we were unable to recover it. 00:32:13.187 [2024-07-12 17:42:51.999965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.187 [2024-07-12 17:42:52.000199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.187 [2024-07-12 17:42:52.000208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.187 qpair failed and we were unable to recover it. 00:32:13.187 [2024-07-12 17:42:52.000277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.187 [2024-07-12 17:42:52.000427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.187 [2024-07-12 17:42:52.000436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.187 qpair failed and we were unable to recover it. 00:32:13.187 [2024-07-12 17:42:52.000526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.187 [2024-07-12 17:42:52.000662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.000673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.000833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.000913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.000923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.001013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.001294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.001532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.001783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.001940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.002148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.002310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.002554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.002875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.002955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.003109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.003202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.003212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.003444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.003596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.003604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.003798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.003953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.003961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.004176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.004304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.004313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.004574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.004713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.004722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.004868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.005210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.005447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.005742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.005892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.006152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.006305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.006494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.006826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.006920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.007127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.007297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.007475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.007686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.007771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.007857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.008177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.008471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.008710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.008872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.009044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.009210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 
00:32:13.188 [2024-07-12 17:42:52.009448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.188 qpair failed and we were unable to recover it. 00:32:13.188 [2024-07-12 17:42:52.009714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.188 [2024-07-12 17:42:52.009861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.009929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.010145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 
00:32:13.189 [2024-07-12 17:42:52.010329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.010519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.010736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.010828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.010998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 
00:32:13.189 [2024-07-12 17:42:52.011406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.011597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.011820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.011911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.011978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 
00:32:13.189 [2024-07-12 17:42:52.012205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.012425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.012659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.012846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.012939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 
00:32:13.189 [2024-07-12 17:42:52.013149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.013380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.013852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.013939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 00:32:13.189 [2024-07-12 17:42:52.014023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.014084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.189 [2024-07-12 17:42:52.014093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.189 qpair failed and we were unable to recover it. 
00:32:13.189 [2024-07-12 17:42:52.014330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.014568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.014577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.189 qpair failed and we were unable to recover it.
00:32:13.189 [2024-07-12 17:42:52.014716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.014869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.014878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.189 qpair failed and we were unable to recover it.
00:32:13.189 [2024-07-12 17:42:52.014954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.189 qpair failed and we were unable to recover it.
00:32:13.189 [2024-07-12 17:42:52.015176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.189 qpair failed and we were unable to recover it.
00:32:13.189 [2024-07-12 17:42:52.015349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.189 qpair failed and we were unable to recover it.
00:32:13.189 [2024-07-12 17:42:52.015617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.015702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.189 qpair failed and we were unable to recover it.
00:32:13.189 [2024-07-12 17:42:52.015837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.016002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.016011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.189 qpair failed and we were unable to recover it.
00:32:13.189 [2024-07-12 17:42:52.016093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.189 [2024-07-12 17:42:52.016271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.016288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.016390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.016594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.016603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.016740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.016807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.016816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.016904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.017275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.017636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.017832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.017920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.018063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.018144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.018152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.018390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.018461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.018470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.018617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.018826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.018835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.018930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.019193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.019497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.019731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.019813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.019915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.020099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.020384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.020612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.020796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.020958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.021260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.021571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.021748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.021909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.022045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.022314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.022692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.022856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.022931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.023099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.023179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.023188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.023324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.023477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.023486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.023637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.023782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.023791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.023977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.024342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.024622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.024777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.024853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.024927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.025176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.025396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.025670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.025871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.026019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.026097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.026105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.190 qpair failed and we were unable to recover it.
00:32:13.190 [2024-07-12 17:42:52.026191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.026263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.190 [2024-07-12 17:42:52.026272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.026487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.026638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.026647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.026814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.026965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.026973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.027119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.027299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.027470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.027652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.027813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.027995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.028285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.028608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.028845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.028918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.029103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.029277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.029286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.029358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.029566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.029574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.029645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.029718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.029727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.029861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.030182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.030486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.030708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.030943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.031079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.031235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.031243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.031477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.031615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.031623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.031803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.032217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.032468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.032736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.032884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.032988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.033142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.033151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.033318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.033401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.033409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.033509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.033601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.191 [2024-07-12 17:42:52.033609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.191 qpair failed and we were unable to recover it.
00:32:13.191 [2024-07-12 17:42:52.033783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.033925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.033934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.034018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.034411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.034590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.034844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.034949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.035078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.035394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.035631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.035806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.035917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.036085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.036240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.036249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.036414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.036567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.036576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.036652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.036855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.036864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.037050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.037284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.037507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.037791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.192 [2024-07-12 17:42:52.037947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.192 qpair failed and we were unable to recover it.
00:32:13.192 [2024-07-12 17:42:52.038029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.038124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.038135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.038280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.038355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.038364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.038576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.038724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.038733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.038954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 
00:32:13.192 [2024-07-12 17:42:52.039316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.039622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.039859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.039945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.040153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 
00:32:13.192 [2024-07-12 17:42:52.040342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.040542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.040798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.040977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.041182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.041267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.041275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 
00:32:13.192 [2024-07-12 17:42:52.041501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.041655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.041663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.041834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.041985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.041993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.192 [2024-07-12 17:42:52.042075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.042159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.192 [2024-07-12 17:42:52.042168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.192 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.042356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.042498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.042506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.042711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.042941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.042949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.043101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.043173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.043181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.043417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.043491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.043500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.043662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.043815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.043824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.043914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.043999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.044093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.044328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.044675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.044867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.044940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.045101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.045232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.045240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.045453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.045540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.045548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.045756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.045895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.045904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.045973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.046282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.046582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.046831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.046977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.047113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.047186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.047195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.047277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.047489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.047498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.047581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.047748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.047757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.047907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.048136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.048145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.048295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.048479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.048488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.048575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.048730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.048738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.048996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.049304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.049616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.049795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.049872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.050013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.050268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 
00:32:13.193 [2024-07-12 17:42:52.050537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.050795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.050866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.050950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.051015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.051024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.193 qpair failed and we were unable to recover it. 00:32:13.193 [2024-07-12 17:42:52.051232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.193 [2024-07-12 17:42:52.051414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.051423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.051576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.051722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.051730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.051807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.051882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.051891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.051974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.052148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.052157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.052245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.052501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.052510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.052691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.052763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.052771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.052943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.053252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.053522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.053787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.053894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.053981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.054238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.054553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.054722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.054881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.055119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.055273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.055282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.055363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.055518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.055527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.055732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.055803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.055811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.055872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.056192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.056497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.056737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.056895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.057030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.057274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.057484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.057741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.057915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.058018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.058347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.058542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.058709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.058852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 
00:32:13.194 [2024-07-12 17:42:52.059007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.059157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.059166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.194 qpair failed and we were unable to recover it. 00:32:13.194 [2024-07-12 17:42:52.059233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.194 [2024-07-12 17:42:52.059315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.059325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.059409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.059554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.059563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.059754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.059825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.059833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.059968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.060273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.060500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.060660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.060806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.060872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.061107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.061251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.061528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.061708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.061797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.061953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.062120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.062129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.062279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.062426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.062435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.062594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.062762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.062771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.062869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.063213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.063399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.063628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.063844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.063985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.064291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.064450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.064773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.064986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.065128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.065271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.065280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.065426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.065493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.065501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.065729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.065811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.065820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.066123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.066339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.066657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.066891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.066981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.067090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.067232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.067241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.067382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.067531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.067540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.067745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.067801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.067810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.067945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.068263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.195 [2024-07-12 17:42:52.068434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.068688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.068775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.068925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.069092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.069100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 00:32:13.195 [2024-07-12 17:42:52.069190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.069327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.195 [2024-07-12 17:42:52.069336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.195 qpair failed and we were unable to recover it. 
00:32:13.196 [2024-07-12 17:42:52.069516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.069650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.069658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.069813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.070037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.070046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.070198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.070481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.070490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.070698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.070852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.070861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 
00:32:13.196 [2024-07-12 17:42:52.070949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.071264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.071527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.071832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.071933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 
00:32:13.196 [2024-07-12 17:42:52.072104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.072334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.072500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.072797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.072942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 
00:32:13.196 [2024-07-12 17:42:52.073033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.073284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.073518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 00:32:13.196 [2024-07-12 17:42:52.073741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.196 [2024-07-12 17:42:52.073897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.196 qpair failed and we were unable to recover it. 
00:32:13.199 [2024-07-12 17:42:52.097027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.097258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.097267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.097406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.097668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.097678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.097822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.097891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.097900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.098055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.098286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.098296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 
00:32:13.199 [2024-07-12 17:42:52.098504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.098710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.098720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.098798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.098896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.098906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.099055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.099355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 
00:32:13.199 [2024-07-12 17:42:52.099587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.099774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.099871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.100017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.100084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.100093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.100293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.100443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.100451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 
00:32:13.199 [2024-07-12 17:42:52.100601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.100688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.100697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.100908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.101219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.101505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 
00:32:13.199 [2024-07-12 17:42:52.101751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.101925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.102082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.102314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.102324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.102579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.102731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.102739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.102955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.103170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.103179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 
00:32:13.199 [2024-07-12 17:42:52.103273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.103419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.199 [2024-07-12 17:42:52.103428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.199 qpair failed and we were unable to recover it. 00:32:13.199 [2024-07-12 17:42:52.103653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.103722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.103731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.103979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.104306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 
00:32:13.200 [2024-07-12 17:42:52.104548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.104789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.104879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.104980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.105159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.105168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.105306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.105442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.105451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 
00:32:13.200 [2024-07-12 17:42:52.105684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.105761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.105770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.105863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.106000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.106009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.106156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.106393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.106402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.106563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.106783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.106792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 
00:32:13.200 [2024-07-12 17:42:52.106869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.107270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.107519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.107698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.107855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 
00:32:13.200 [2024-07-12 17:42:52.107942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.108235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.108531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.108722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.108884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 
00:32:13.200 [2024-07-12 17:42:52.109116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.109257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.109266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.109342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.109597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.109606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.109703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.109875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.109884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.110138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.110210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.110219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 
00:32:13.200 [2024-07-12 17:42:52.110375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.110475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.110484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.110556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.110705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.110714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.110891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.111048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.111057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.200 qpair failed and we were unable to recover it. 00:32:13.200 [2024-07-12 17:42:52.111133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.111227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.200 [2024-07-12 17:42:52.111236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 
00:32:13.201 [2024-07-12 17:42:52.111324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.111394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.111403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 00:32:13.201 [2024-07-12 17:42:52.111581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.111870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.111879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 00:32:13.201 [2024-07-12 17:42:52.111969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.112117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.112126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 00:32:13.201 [2024-07-12 17:42:52.112344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.112425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.112434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 
00:32:13.201 [2024-07-12 17:42:52.112589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.112681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.112690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 00:32:13.201 [2024-07-12 17:42:52.112855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.113008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.113017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 00:32:13.201 [2024-07-12 17:42:52.113147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.113216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.201 [2024-07-12 17:42:52.113225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.201 qpair failed and we were unable to recover it. 00:32:13.474 [2024-07-12 17:42:52.113412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.474 [2024-07-12 17:42:52.113504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.474 [2024-07-12 17:42:52.113515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.474 qpair failed and we were unable to recover it. 
00:32:13.474 [2024-07-12 17:42:52.113573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.113765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.113775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.113877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.113954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.113962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.114037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.114262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.114272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.114454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.114612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.114621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.114827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.114912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.114920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.115069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.115167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.115176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.115352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.115435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.115444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.115605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.115671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.115680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.115823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.116184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.116486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.116667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.116900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.116984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.117193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.117283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.117292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.117446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.117525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.117534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.117612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.117700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.117709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.117859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.118177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.118416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.118676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.118820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.118920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.119189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.119551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.119776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.119882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.119960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.120361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.120525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.120747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.120991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.121213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.121353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.121361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.121520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.121621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.121629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.121708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.121794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.121803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.121874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.122122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.122344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.122571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.122742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.122893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.123223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.123387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.123679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.474 qpair failed and we were unable to recover it.
00:32:13.474 [2024-07-12 17:42:52.123840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.474 [2024-07-12 17:42:52.123979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.124064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.124209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.124219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.124376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.124585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.124594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.124685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.124764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.124772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.124920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.125165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.125492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.125736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.125831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.125996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.126230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.126489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.126719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.126895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.126973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.127205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.127438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.127679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.127785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.127887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.128194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.128476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.128776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.128966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.129146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.129318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.129328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.129413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.129475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.129488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.129564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.129711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.129720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.129887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.130040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.130050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.130297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.130527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.130537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.130733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.130888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.130896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.130958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.131215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.131402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.131679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.131788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.132021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.132117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.132126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.132281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.132462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.132473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.132667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.132739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.132748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.132959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.133163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.133172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.133312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.133550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.133559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.133707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.133860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.133869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.133951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.134306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.134560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.134864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.134959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.135141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.135318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.135326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.135562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.135702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.135711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.135953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.136216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.136467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.136774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.136986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.137246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.137345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.137354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.137504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.137676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.137685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.137925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.138071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.475 [2024-07-12 17:42:52.138080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.475 qpair failed and we were unable to recover it.
00:32:13.475 [2024-07-12 17:42:52.138153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.475 [2024-07-12 17:42:52.138237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.475 [2024-07-12 17:42:52.138246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.475 qpair failed and we were unable to recover it. 00:32:13.475 [2024-07-12 17:42:52.138321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.475 [2024-07-12 17:42:52.138411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.475 [2024-07-12 17:42:52.138420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.475 qpair failed and we were unable to recover it. 00:32:13.475 [2024-07-12 17:42:52.138503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.138576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.138585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.138792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.138929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.138937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.139006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.139333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.139641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.139905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.139993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.140084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.140228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.140236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.140451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.140547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.140555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.140763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.140981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.140990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.141200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.141282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.141291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.141428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.141568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.141577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.141642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.141709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.141718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.141871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.142052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.142061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.142268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.142364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.142373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.142548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.142684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.142693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.142874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.143082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.143407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.143581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.143795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.143979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.144129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.144272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.144281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.144365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.144593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.144601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.144751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.144917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.144925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.144995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.145166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.145538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.145792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.145946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.146027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.146218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.146532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.146772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.146919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.147062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.147229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.147238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.147381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.147522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.147531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.147700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.147787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.147795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.147874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.148029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.148038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.148177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.148342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.148351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.148505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.148748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.148757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.148844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.149103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.149112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 
00:32:13.476 [2024-07-12 17:42:52.149272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.149426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.149435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.149532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.149606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.476 [2024-07-12 17:42:52.149615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.476 qpair failed and we were unable to recover it. 00:32:13.476 [2024-07-12 17:42:52.149685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.149862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.149871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.150019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.150331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.150633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.150814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.150974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.151205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.151384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.151393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.151547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.151703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.151711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.151967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.152248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.152548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.152808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.152950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.153158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.153397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.153578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.153818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.153928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.153998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.154305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.154594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.154831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.154977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.155065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.155131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.155140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.155290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.155463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.155472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.155642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.155778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.155787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.155887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.156144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.156358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.156667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.156756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.156899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.157158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.157423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.157671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.157911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.158066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.158155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.158164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.158343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.158485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.158494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.158630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.158777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.158786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.159037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.159179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.159188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.159356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.159500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.159509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.159572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.159749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.159758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.160020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.160202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.160486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.160653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.160789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.160998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.161099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.161230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.161238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.161408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.161542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.161551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.161645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.161911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.161919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.162049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.162209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.162217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 00:32:13.477 [2024-07-12 17:42:52.162315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.162450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.162459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.477 qpair failed and we were unable to recover it. 
00:32:13.477 [2024-07-12 17:42:52.162608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.477 [2024-07-12 17:42:52.162742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.162751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.162909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.163301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.163525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.163796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.163988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.164071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.164214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.164223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.164360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.164434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.164442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.164708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.164941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.164950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.165086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.165307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.165315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.165473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.165681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.165690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.165829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.165996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.166150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.166323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.166622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.166819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.166910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.166999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.167149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.167157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.167261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.167400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.167409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.167648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.167784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.167793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.167931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.168076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.168261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.168482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.168791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.168944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.169016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.169162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.169170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.169320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.169524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.169533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.169618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.169769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.169780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.169858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.170194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.170439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.170682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.170777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.170862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.171274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.171589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.171910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.171997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.172145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.172384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.172392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.172542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.172696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.172708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.172867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.172955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.172963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.173109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.173403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.173633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.478 [2024-07-12 17:42:52.173814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.173919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.174073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.174226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.174235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.174319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.174397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.174405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 00:32:13.478 [2024-07-12 17:42:52.174585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.174668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.478 [2024-07-12 17:42:52.174677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.478 qpair failed and we were unable to recover it. 
00:32:13.480 [2024-07-12 17:42:52.192033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.192183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.192191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.480 qpair failed and we were unable to recover it.
00:32:13.480 [2024-07-12 17:42:52.192272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 17:42:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:13.480 [2024-07-12 17:42:52.192444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.192453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.480 qpair failed and we were unable to recover it.
00:32:13.480 17:42:52 -- common/autotest_common.sh@852 -- # return 0
00:32:13.480 [2024-07-12 17:42:52.192601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.192744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.192753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.480 qpair failed and we were unable to recover it.
00:32:13.480 17:42:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:32:13.480 [2024-07-12 17:42:52.192975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.193113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.193121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.480 qpair failed and we were unable to recover it.
00:32:13.480 17:42:52 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:13.480 [2024-07-12 17:42:52.193221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.193292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.193301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.480 qpair failed and we were unable to recover it.
00:32:13.480 [2024-07-12 17:42:52.193381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 17:42:52 -- common/autotest_common.sh@10 -- # set +x
00:32:13.480 [2024-07-12 17:42:52.193512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.480 [2024-07-12 17:42:52.193521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.480 qpair failed and we were unable to recover it.
00:32:13.480 [2024-07-12 17:42:52.197663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.480 [2024-07-12 17:42:52.197895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.480 [2024-07-12 17:42:52.197905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.480 qpair failed and we were unable to recover it. 00:32:13.480 [2024-07-12 17:42:52.197992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.480 [2024-07-12 17:42:52.198091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.480 [2024-07-12 17:42:52.198100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.480 qpair failed and we were unable to recover it. 00:32:13.480 [2024-07-12 17:42:52.198200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.480 [2024-07-12 17:42:52.198337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.480 [2024-07-12 17:42:52.198346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.480 qpair failed and we were unable to recover it. 00:32:13.480 [2024-07-12 17:42:52.198429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.480 [2024-07-12 17:42:52.198587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.198596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.198686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.198806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.198815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.198955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.199191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.199600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.199779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.199873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.199954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.200302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.200624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.200887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.200979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.201058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.201379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.201591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.201759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.201857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.201994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.202068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.202077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.202234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.202335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.202345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.202510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.202587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.202598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.202738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.203141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.203394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.203636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.203829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.203913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.204237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.204584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.204757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.204852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.205000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.205180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.205369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.205546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.205710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.205873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.205990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.206202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.206331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.206340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.206495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.206564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.206573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.206660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.206750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.206759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.206926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.206994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.207004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.207101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.207191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.207199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.207432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.207574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.207585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.207664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.207760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.207769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.207868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.208149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.208373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.208561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.208751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.208914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.208999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.209079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.209414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.209577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.209800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.209880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.209961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 
00:32:13.481 [2024-07-12 17:42:52.210133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.210376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.210766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.481 [2024-07-12 17:42:52.210981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.481 qpair failed and we were unable to recover it. 00:32:13.481 [2024-07-12 17:42:52.211237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.482 [2024-07-12 17:42:52.211330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.482 [2024-07-12 17:42:52.211339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.482 qpair failed and we were unable to recover it. 
00:32:13.482 [2024-07-12 17:42:52.211438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.211522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.211532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.211607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.211743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.211752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.211842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.211988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.211996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.212085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.212290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.212527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.212775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.212924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.213209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.213461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.213612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.213783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.213867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.213959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.214130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.214289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.214460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.214681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.214776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.214939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.215248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.215507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.215679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.215834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.215921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.216156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.216450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.216823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.216928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.217108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.217298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.217530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.217773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.217878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.217959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.218138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f094c000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.218304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.218527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.218753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.218842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.218911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.219148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.219336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.219494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.219647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.219927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.219994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.220099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.220260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.220495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.220676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.220822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.220899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.220987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.221075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.221085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.221162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.221229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.221237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.482 qpair failed and we were unable to recover it.
00:32:13.482 [2024-07-12 17:42:52.221315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.482 [2024-07-12 17:42:52.221405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.221414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.221484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.221552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.221560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.221703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.221784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.221792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.221878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.221940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.221949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.222018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.222159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.222343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.222513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.222682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.222854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.222931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.223138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.223307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.223520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.223766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.223853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.223991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.224236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.224546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.224726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.224819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.224959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.225183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.225339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.225564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.225777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.225861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.226012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.226250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.226405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.226595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.226852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.226943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.227036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 17:42:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:13.483 [2024-07-12 17:42:52.227117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.227191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.227350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 17:42:52 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:13.483 [2024-07-12 17:42:52.227513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:13.483 [2024-07-12 17:42:52.227841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.227937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.228030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 17:42:52 -- common/autotest_common.sh@10 -- # set +x
00:32:13.483 [2024-07-12 17:42:52.228100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.228110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.228264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.228331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.228341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.228415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.228558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.228567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.228734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.228810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.228818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.228889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.229263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.229499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.229678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.483 [2024-07-12 17:42:52.229865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420
00:32:13.483 qpair failed and we were unable to recover it.
00:32:13.483 [2024-07-12 17:42:52.229943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 00:32:13.483 [2024-07-12 17:42:52.230089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 00:32:13.483 [2024-07-12 17:42:52.230426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 00:32:13.483 [2024-07-12 17:42:52.230593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 
00:32:13.483 [2024-07-12 17:42:52.230772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.230930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 00:32:13.483 [2024-07-12 17:42:52.231010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 00:32:13.483 [2024-07-12 17:42:52.231194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 00:32:13.483 [2024-07-12 17:42:52.231507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 
00:32:13.483 [2024-07-12 17:42:52.231658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.483 qpair failed and we were unable to recover it. 00:32:13.483 [2024-07-12 17:42:52.231896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.483 [2024-07-12 17:42:52.231959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.231968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.232137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.232389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.232613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.232830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.232906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.232998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.233234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.233392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.233552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.233793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.233883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.234018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.234244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.234414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.234564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.234733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.234818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.234988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.235223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.235472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.235650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.235824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.235918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.236082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.236258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.236507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.236651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.236803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.236963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.237113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.237352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.237502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.237825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.237971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.238048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.238182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.238191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.238332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.238537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.238546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.238628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.238763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.238772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.238980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.239147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.239367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.239603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.239745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.239848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.240084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.240266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.240523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.240767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.240863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.240949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.241098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.241386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.241680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.241903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.241981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.242147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 
00:32:13.484 [2024-07-12 17:42:52.242387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.242536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.484 [2024-07-12 17:42:52.242744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.484 [2024-07-12 17:42:52.242884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.484 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.243022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.243184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.243483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.243725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.243808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.244030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.244242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.244475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.244718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.244823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.244978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.245208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.245498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.245708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.245798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.245883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.246100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.246340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.246635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.246854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.246994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.247071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.247396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.247646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.247822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.247892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.247980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.248152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.248541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.248766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.248860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.249036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.249171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.249180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.249406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.249472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.249481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.249575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.249710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.249719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.249908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.250177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.250461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.250717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.250963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.251099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.251232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.251241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.251381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.251523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.251531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.251761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.251846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.251855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.251943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.252115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.252123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.252213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.252388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.252397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.252554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.252761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.252769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.252925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.253009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.253018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.253106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.253311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.253320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.253577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.253836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.253845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.254016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.254271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.254441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 00:32:13.485 [2024-07-12 17:42:52.254670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.254773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.485 qpair failed and we were unable to recover it. 
00:32:13.485 [2024-07-12 17:42:52.254854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.485 [2024-07-12 17:42:52.255024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.255168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.255341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.255510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 Malloc0 00:32:13.486 [2024-07-12 17:42:52.255654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.255815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.255908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.256061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.256219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.256227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.256303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.486 [2024-07-12 17:42:52.256399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.256407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.256617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 17:42:52 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:13.486 [2024-07-12 17:42:52.256893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.256902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.256988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.486 [2024-07-12 17:42:52.257204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.257214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:32:13.486 [2024-07-12 17:42:52.257353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.257432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.257441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.257610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.257694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.257703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.257859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.257994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.258002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.258096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.258185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.258196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.258405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.258540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.258548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.258697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.258914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.258923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.259078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.259246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.259270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.259352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.259568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.259577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.259656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.259860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.259868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.259955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.260120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.260129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.260280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.260421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.260430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.260581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.260802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.260811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.260957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.261095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.261103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.261273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.261441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.261452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.261533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.261758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.261766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.261957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.262276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.262465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.262617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.262764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.262917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.263137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.263369] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.486 [2024-07-12 17:42:52.263406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.263642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.263730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.263791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.264163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.264403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.264645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.264857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.265092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.265263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.265509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.265806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.265949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.266104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.266205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.266214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.266430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.266604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.266613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.266770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.266909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.266919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.267069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.267228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.267459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 
00:32:13.486 [2024-07-12 17:42:52.267777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.267967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.268064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.268143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.268151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.268304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.268456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.268465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.486 qpair failed and we were unable to recover it. 00:32:13.486 [2024-07-12 17:42:52.268694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.486 [2024-07-12 17:42:52.268874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.268882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.269035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.269298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.269548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.269852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.269997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.270166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.270320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.270329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.270465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.270611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.270619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.270795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.270899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.270907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.270985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.271133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.271141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.271214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.271422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.271430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.271609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.271790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.271798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.271890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.272139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.487 [2024-07-12 17:42:52.272511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 17:42:52 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:13.487 [2024-07-12 17:42:52.272754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.272910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.487 [2024-07-12 17:42:52.272985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:32:13.487 [2024-07-12 17:42:52.273287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.273625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.273812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.273983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.274049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.274215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.274440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.274732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.274944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.275096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.275244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.275252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.275393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.275546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.275555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.275739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.275832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.275841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.275995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.276166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.276174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.276315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.276548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.276556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.276715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.276860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.276869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.277047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.277292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.277300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.277472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.277677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.277686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.277835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.277988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.277996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.278143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.278414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.278423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.278631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.278777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.278786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.278859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.279128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.279364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.279794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.279957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 [2024-07-12 17:42:52.280100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.280246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.280259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.487 [2024-07-12 17:42:52.280361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.280613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.280622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 17:42:52 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:13.487 [2024-07-12 17:42:52.280769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.280937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.280946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.487 [2024-07-12 17:42:52.281048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.281125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.281133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 00:32:13.487 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:32:13.487 [2024-07-12 17:42:52.281287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.281456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.487 [2024-07-12 17:42:52.281464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.487 qpair failed and we were unable to recover it. 
00:32:13.487 [2024-07-12 17:42:52.281618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.281822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.281830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.282006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.282271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.282439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.282661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.282817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.282908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.283129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.283523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.283706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.283794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.284023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.284326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.284623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.284859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.284926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.285003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.285098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.285106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.285322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.285405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.285414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.285569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.285805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.285813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.285905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.286234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.286380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.286700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.286942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.287036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.287114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.287122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.287269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.287420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.287429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.287585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.287799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.287807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.287966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.288190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.288198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.288315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.488 [2024-07-12 17:42:52.288407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.288416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.288506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 17:42:52 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.488 [2024-07-12 17:42:52.288736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.288745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.288824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.288991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.289000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.488 [2024-07-12 17:42:52.289230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:32:13.488 [2024-07-12 17:42:52.289318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.289327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.289464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.289532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.289540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.289682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.289763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.289771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.289857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.290184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.290348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.290724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.290872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.290944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.291122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.291130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.291336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.291495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.291503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.291652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.291891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.488 [2024-07-12 17:42:52.291899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0944000b90 with addr=10.0.0.2, port=4420 00:32:13.488 qpair failed and we were unable to recover it. 00:32:13.488 [2024-07-12 17:42:52.292099] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.488 [2024-07-12 17:42:52.294027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.488 [2024-07-12 17:42:52.294121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.488 [2024-07-12 17:42:52.294141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.488 [2024-07-12 17:42:52.294148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.488 [2024-07-12 17:42:52.294157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.488 [2024-07-12 17:42:52.294175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.488 qpair failed and we were unable to recover it.
00:32:13.488 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.488 17:42:52 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.488 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.488 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:32:13.488 [2024-07-12 17:42:52.304002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.488 [2024-07-12 17:42:52.304098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.488 [2024-07-12 17:42:52.304116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.488 [2024-07-12 17:42:52.304123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.488 [2024-07-12 17:42:52.304128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.488 [2024-07-12 17:42:52.304142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.488 qpair failed and we were unable to recover it.
00:32:13.488 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.488 17:42:52 -- host/target_disconnect.sh@58 -- # wait 125020 00:32:13.488 [2024-07-12 17:42:52.314038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.488 [2024-07-12 17:42:52.314128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.488 [2024-07-12 17:42:52.314143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.488 [2024-07-12 17:42:52.314149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.488 [2024-07-12 17:42:52.314155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.488 [2024-07-12 17:42:52.314169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.324226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.488 [2024-07-12 17:42:52.324361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.488 [2024-07-12 17:42:52.324377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.488 [2024-07-12 17:42:52.324383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.488 [2024-07-12 17:42:52.324388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.488 [2024-07-12 17:42:52.324403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.488 qpair failed and we were unable to recover it. 
00:32:13.488 [2024-07-12 17:42:52.334020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.488 [2024-07-12 17:42:52.334110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.488 [2024-07-12 17:42:52.334124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.334130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.334138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.334151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.344043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.344125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.344140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.344146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.344152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.344166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.354075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.354161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.354176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.354182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.354188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.354201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.364294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.364416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.364431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.364436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.364442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.364455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.374138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.374244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.374263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.374269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.374274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.374287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.384199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.384294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.384310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.384315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.384321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.384334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.394221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.394309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.394324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.394330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.394336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.394350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.404440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.404550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.404565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.404571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.404576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.404590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.414311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.414401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.414415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.414421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.414427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.414440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.489 [2024-07-12 17:42:52.424329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.489 [2024-07-12 17:42:52.424412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.489 [2024-07-12 17:42:52.424426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.489 [2024-07-12 17:42:52.424432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.489 [2024-07-12 17:42:52.424441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.489 [2024-07-12 17:42:52.424454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.489 qpair failed and we were unable to recover it. 
00:32:13.749 [2024-07-12 17:42:52.434509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.749 [2024-07-12 17:42:52.434654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.749 [2024-07-12 17:42:52.434669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.749 [2024-07-12 17:42:52.434676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.749 [2024-07-12 17:42:52.434682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.749 [2024-07-12 17:42:52.434695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.749 qpair failed and we were unable to recover it. 
00:32:13.749 [2024-07-12 17:42:52.444577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.749 [2024-07-12 17:42:52.444693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.749 [2024-07-12 17:42:52.444708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.749 [2024-07-12 17:42:52.444714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.749 [2024-07-12 17:42:52.444719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:13.749 [2024-07-12 17:42:52.444734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.749 qpair failed and we were unable to recover it. 
00:32:13.749 [2024-07-12 17:42:52.454456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.749 [2024-07-12 17:42:52.454541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.749 [2024-07-12 17:42:52.454555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.749 [2024-07-12 17:42:52.454561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.749 [2024-07-12 17:42:52.454566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.749 [2024-07-12 17:42:52.454581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.749 qpair failed and we were unable to recover it.
00:32:13.749 [2024-07-12 17:42:52.464419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.749 [2024-07-12 17:42:52.464495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.749 [2024-07-12 17:42:52.464509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.749 [2024-07-12 17:42:52.464515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.749 [2024-07-12 17:42:52.464520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.749 [2024-07-12 17:42:52.464533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.749 qpair failed and we were unable to recover it.
00:32:13.749 [2024-07-12 17:42:52.474462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.749 [2024-07-12 17:42:52.474553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.749 [2024-07-12 17:42:52.474570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.749 [2024-07-12 17:42:52.474576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.749 [2024-07-12 17:42:52.474582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.749 [2024-07-12 17:42:52.474595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.749 qpair failed and we were unable to recover it.
00:32:13.749 [2024-07-12 17:42:52.484699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.749 [2024-07-12 17:42:52.484802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.749 [2024-07-12 17:42:52.484817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.749 [2024-07-12 17:42:52.484823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.749 [2024-07-12 17:42:52.484829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.749 [2024-07-12 17:42:52.484842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.749 qpair failed and we were unable to recover it.
00:32:13.749 [2024-07-12 17:42:52.494478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.749 [2024-07-12 17:42:52.494555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.749 [2024-07-12 17:42:52.494569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.749 [2024-07-12 17:42:52.494575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.749 [2024-07-12 17:42:52.494580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.749 [2024-07-12 17:42:52.494593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.749 qpair failed and we were unable to recover it.
00:32:13.749 [2024-07-12 17:42:52.504618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.504708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.504722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.504728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.504734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.504748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.514601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.514693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.514706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.514715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.514720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.514733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.524727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.524831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.524846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.524851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.524857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.524869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.534606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.534697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.534715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.534721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.534726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.534740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.544667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.544746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.544760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.544765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.544771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.544783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.554730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.554812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.554825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.554831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.554836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.554849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.564887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.564995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.565011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.565017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.565022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.565036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.574743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.574864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.574878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.574885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.574890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.574903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.584838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.584910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.584924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.584929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.584934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.584947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.594861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.594935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.594948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.594954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.594959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.594972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.605065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.605170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.605184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.605194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.605199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.605211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.614895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.614974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.614988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.614994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.614999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.615013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.624983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.625074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.625089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.625095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.625100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.625113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.635092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.635178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.635192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.750 [2024-07-12 17:42:52.635198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.750 [2024-07-12 17:42:52.635203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.750 [2024-07-12 17:42:52.635216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.750 qpair failed and we were unable to recover it.
00:32:13.750 [2024-07-12 17:42:52.645191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.750 [2024-07-12 17:42:52.645317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.750 [2024-07-12 17:42:52.645331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.645337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.645343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.645356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:13.751 [2024-07-12 17:42:52.655016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.751 [2024-07-12 17:42:52.655095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.751 [2024-07-12 17:42:52.655110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.655115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.655120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.655133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:13.751 [2024-07-12 17:42:52.665008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.751 [2024-07-12 17:42:52.665090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.751 [2024-07-12 17:42:52.665104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.665110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.665115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.665129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:13.751 [2024-07-12 17:42:52.675024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.751 [2024-07-12 17:42:52.675101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.751 [2024-07-12 17:42:52.675115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.675121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.675126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.675139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:13.751 [2024-07-12 17:42:52.685307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.751 [2024-07-12 17:42:52.685413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.751 [2024-07-12 17:42:52.685428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.685434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.685439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.685452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:13.751 [2024-07-12 17:42:52.695149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.751 [2024-07-12 17:42:52.695229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.751 [2024-07-12 17:42:52.695246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.695251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.695261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.695274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:13.751 [2024-07-12 17:42:52.705105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.751 [2024-07-12 17:42:52.705182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.751 [2024-07-12 17:42:52.705196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.705202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.705207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.705221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:13.751 [2024-07-12 17:42:52.715222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:13.751 [2024-07-12 17:42:52.715307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:13.751 [2024-07-12 17:42:52.715321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:13.751 [2024-07-12 17:42:52.715327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:13.751 [2024-07-12 17:42:52.715333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:13.751 [2024-07-12 17:42:52.715346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.751 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.725375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.725484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.725498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.725505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.725511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.725524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.735189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.735279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.735292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.735298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.735304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.735324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.745226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.745315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.745329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.745335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.745341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.745353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.755319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.755399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.755412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.755419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.755424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.755438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.765543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.765657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.765671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.765677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.765682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.765695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.775398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.775501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.775514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.775520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.775525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.775540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.785412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.785487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.785504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.785509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.785515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.785528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.795388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.795476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.795490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.795497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.795502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.795515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.805703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.015 [2024-07-12 17:42:52.805811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.015 [2024-07-12 17:42:52.805826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.015 [2024-07-12 17:42:52.805831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.015 [2024-07-12 17:42:52.805837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.015 [2024-07-12 17:42:52.805850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.015 qpair failed and we were unable to recover it.
00:32:14.015 [2024-07-12 17:42:52.815581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.015 [2024-07-12 17:42:52.815663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.015 [2024-07-12 17:42:52.815677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.015 [2024-07-12 17:42:52.815682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.015 [2024-07-12 17:42:52.815689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.015 [2024-07-12 17:42:52.815702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.015 qpair failed and we were unable to recover it. 
00:32:14.015 [2024-07-12 17:42:52.825554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.015 [2024-07-12 17:42:52.825631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.015 [2024-07-12 17:42:52.825644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.015 [2024-07-12 17:42:52.825650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.015 [2024-07-12 17:42:52.825659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.015 [2024-07-12 17:42:52.825671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.015 qpair failed and we were unable to recover it. 
00:32:14.015 [2024-07-12 17:42:52.835552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.015 [2024-07-12 17:42:52.835627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.015 [2024-07-12 17:42:52.835641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.015 [2024-07-12 17:42:52.835647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.015 [2024-07-12 17:42:52.835652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.015 [2024-07-12 17:42:52.835665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.015 qpair failed and we were unable to recover it. 
00:32:14.015 [2024-07-12 17:42:52.845802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.015 [2024-07-12 17:42:52.845909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.015 [2024-07-12 17:42:52.845924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.015 [2024-07-12 17:42:52.845930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.015 [2024-07-12 17:42:52.845935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.015 [2024-07-12 17:42:52.845949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.015 qpair failed and we were unable to recover it. 
00:32:14.015 [2024-07-12 17:42:52.855683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.015 [2024-07-12 17:42:52.855764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.015 [2024-07-12 17:42:52.855777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.015 [2024-07-12 17:42:52.855783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.015 [2024-07-12 17:42:52.855789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.015 [2024-07-12 17:42:52.855802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.015 qpair failed and we were unable to recover it. 
00:32:14.015 [2024-07-12 17:42:52.865697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.015 [2024-07-12 17:42:52.865787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.015 [2024-07-12 17:42:52.865800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.015 [2024-07-12 17:42:52.865806] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.015 [2024-07-12 17:42:52.865811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.015 [2024-07-12 17:42:52.865825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.015 qpair failed and we were unable to recover it. 
00:32:14.015 [2024-07-12 17:42:52.875749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.015 [2024-07-12 17:42:52.875847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.015 [2024-07-12 17:42:52.875861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.015 [2024-07-12 17:42:52.875867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.875873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.875886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.885972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.886076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.886090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.886096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.886101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.886114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.895822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.895944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.895959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.895965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.895971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.895984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.905841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.905921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.905934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.905939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.905945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.905957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.915848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.915925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.915939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.915944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.915953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.915966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.926092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.926198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.926213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.926218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.926224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.926237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.935888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.935968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.935982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.935988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.935993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.936007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.945956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.946038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.946052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.946058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.946063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.946076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.955991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.956077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.956091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.956097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.956102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.956115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.966216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.966340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.966355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.966361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.966367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.966380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.016 [2024-07-12 17:42:52.976077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.016 [2024-07-12 17:42:52.976175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.016 [2024-07-12 17:42:52.976189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.016 [2024-07-12 17:42:52.976195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.016 [2024-07-12 17:42:52.976200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.016 [2024-07-12 17:42:52.976213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.016 qpair failed and we were unable to recover it. 
00:32:14.274 [2024-07-12 17:42:52.986099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.274 [2024-07-12 17:42:52.986209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.274 [2024-07-12 17:42:52.986223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.274 [2024-07-12 17:42:52.986229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.274 [2024-07-12 17:42:52.986235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.274 [2024-07-12 17:42:52.986248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.274 qpair failed and we were unable to recover it. 
00:32:14.274 [2024-07-12 17:42:52.996122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.274 [2024-07-12 17:42:52.996202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.274 [2024-07-12 17:42:52.996215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.274 [2024-07-12 17:42:52.996221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.274 [2024-07-12 17:42:52.996226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.274 [2024-07-12 17:42:52.996239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.274 qpair failed and we were unable to recover it. 
00:32:14.274 [2024-07-12 17:42:53.006345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.274 [2024-07-12 17:42:53.006488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.274 [2024-07-12 17:42:53.006503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.274 [2024-07-12 17:42:53.006513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.274 [2024-07-12 17:42:53.006518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.274 [2024-07-12 17:42:53.006531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.274 qpair failed and we were unable to recover it. 
00:32:14.274 [2024-07-12 17:42:53.016155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.274 [2024-07-12 17:42:53.016271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.274 [2024-07-12 17:42:53.016286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.274 [2024-07-12 17:42:53.016293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.274 [2024-07-12 17:42:53.016298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.274 [2024-07-12 17:42:53.016311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.274 qpair failed and we were unable to recover it. 
00:32:14.274 [2024-07-12 17:42:53.026201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.274 [2024-07-12 17:42:53.026296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.274 [2024-07-12 17:42:53.026311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.274 [2024-07-12 17:42:53.026318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.274 [2024-07-12 17:42:53.026323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.274 [2024-07-12 17:42:53.026336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.274 qpair failed and we were unable to recover it. 
00:32:14.274 [2024-07-12 17:42:53.036247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.274 [2024-07-12 17:42:53.036326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.274 [2024-07-12 17:42:53.036340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.274 [2024-07-12 17:42:53.036346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.274 [2024-07-12 17:42:53.036351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.274 [2024-07-12 17:42:53.036364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.274 qpair failed and we were unable to recover it. 
00:32:14.274 [2024-07-12 17:42:53.046476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.274 [2024-07-12 17:42:53.046583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.274 [2024-07-12 17:42:53.046597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.274 [2024-07-12 17:42:53.046603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.274 [2024-07-12 17:42:53.046609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.274 [2024-07-12 17:42:53.046623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.274 qpair failed and we were unable to recover it. 
00:32:14.275 [2024-07-12 17:42:53.056308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.275 [2024-07-12 17:42:53.056443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.275 [2024-07-12 17:42:53.056458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.275 [2024-07-12 17:42:53.056463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.275 [2024-07-12 17:42:53.056469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.275 [2024-07-12 17:42:53.056482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.275 qpair failed and we were unable to recover it. 
00:32:14.275 [2024-07-12 17:42:53.066331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.275 [2024-07-12 17:42:53.066418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.275 [2024-07-12 17:42:53.066432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.275 [2024-07-12 17:42:53.066438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.275 [2024-07-12 17:42:53.066443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.275 [2024-07-12 17:42:53.066456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.275 qpair failed and we were unable to recover it. 
00:32:14.275 [2024-07-12 17:42:53.076345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.275 [2024-07-12 17:42:53.076425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.275 [2024-07-12 17:42:53.076438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.275 [2024-07-12 17:42:53.076444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.275 [2024-07-12 17:42:53.076450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.275 [2024-07-12 17:42:53.076463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.275 qpair failed and we were unable to recover it. 
00:32:14.275 [2024-07-12 17:42:53.086606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.275 [2024-07-12 17:42:53.086710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.275 [2024-07-12 17:42:53.086724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.275 [2024-07-12 17:42:53.086730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.275 [2024-07-12 17:42:53.086735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.275 [2024-07-12 17:42:53.086748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.275 qpair failed and we were unable to recover it. 
00:32:14.275 [2024-07-12 17:42:53.096418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.275 [2024-07-12 17:42:53.096500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.275 [2024-07-12 17:42:53.096513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.275 [2024-07-12 17:42:53.096522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.275 [2024-07-12 17:42:53.096527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.275 [2024-07-12 17:42:53.096540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.275 qpair failed and we were unable to recover it. 
[The same six-record CONNECT failure sequence (ctrlr.c:662 Unknown controller ID 0x1 → nvme_fabric.c connect failed rc -5, sct 1, sc 130 → nvme_tcp.c poll/connect failures for tqpair=0x7f0944000b90 → nvme_qpair.c CQ transport error -6 on qpair id 2 → "qpair failed and we were unable to recover it.") repeats roughly every 10 ms, with timestamps running from 2024-07-12 17:42:53.106434 through 17:42:53.447869.]
00:32:14.536 [2024-07-12 17:42:53.457566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.536 [2024-07-12 17:42:53.457656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.536 [2024-07-12 17:42:53.457670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.536 [2024-07-12 17:42:53.457676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.536 [2024-07-12 17:42:53.457681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.536 [2024-07-12 17:42:53.457694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.536 qpair failed and we were unable to recover it. 
00:32:14.536 [2024-07-12 17:42:53.467694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.536 [2024-07-12 17:42:53.467814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.537 [2024-07-12 17:42:53.467828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.537 [2024-07-12 17:42:53.467834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.537 [2024-07-12 17:42:53.467839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.537 [2024-07-12 17:42:53.467852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.537 [2024-07-12 17:42:53.477636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.537 [2024-07-12 17:42:53.477717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.537 [2024-07-12 17:42:53.477731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.537 [2024-07-12 17:42:53.477736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.537 [2024-07-12 17:42:53.477742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.537 [2024-07-12 17:42:53.477754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.537 [2024-07-12 17:42:53.487812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.537 [2024-07-12 17:42:53.487917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.537 [2024-07-12 17:42:53.487930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.537 [2024-07-12 17:42:53.487936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.537 [2024-07-12 17:42:53.487942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.537 [2024-07-12 17:42:53.487954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.537 [2024-07-12 17:42:53.497676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.537 [2024-07-12 17:42:53.497756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.537 [2024-07-12 17:42:53.497769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.537 [2024-07-12 17:42:53.497778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.537 [2024-07-12 17:42:53.497783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.537 [2024-07-12 17:42:53.497796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.507645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.507722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.507736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.507741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.507746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.507759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.517747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.517834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.517847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.517853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.517858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.517870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.527967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.528079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.528093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.528098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.528104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.528117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.537838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.537921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.537934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.537940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.537945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.537958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.547885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.547961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.547975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.547980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.547986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.547999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.557936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.558017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.558031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.558037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.558042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.558055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.568112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.568214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.568228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.568234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.568239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.568252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.577908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.577994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.578008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.578013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.578019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.578032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.587976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.588049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.588063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.588072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.588077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.588090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.598032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.598156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.598171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.598177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.598182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.598195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.608236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.608395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.608410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.608415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.608421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.608434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.618089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.618169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.618182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.618188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.618194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.618207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.628111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.628185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.628198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.628204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.628210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.628223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.638137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.638216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.638229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.638235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.638241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.638257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.797 qpair failed and we were unable to recover it. 
00:32:14.797 [2024-07-12 17:42:53.648372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.797 [2024-07-12 17:42:53.648470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.797 [2024-07-12 17:42:53.648485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.797 [2024-07-12 17:42:53.648490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.797 [2024-07-12 17:42:53.648496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.797 [2024-07-12 17:42:53.648509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.658280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.658400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.658414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.658420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.658425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.658438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.668236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.668319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.668332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.668338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.668343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.668356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.678328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.678407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.678423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.678429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.678434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.678446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.688527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.688635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.688649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.688655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.688660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.688673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.698338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.698434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.698449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.698455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.698460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.698473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.708421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.708547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.708562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.708567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.708573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.708586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.718443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.718573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.718588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.718593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.718598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.718615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.728643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.798 [2024-07-12 17:42:53.728751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.798 [2024-07-12 17:42:53.728765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.798 [2024-07-12 17:42:53.728770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.798 [2024-07-12 17:42:53.728776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:14.798 [2024-07-12 17:42:53.728788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.798 qpair failed and we were unable to recover it. 
00:32:14.798 [2024-07-12 17:42:53.738485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.798 [2024-07-12 17:42:53.738577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.798 [2024-07-12 17:42:53.738591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.798 [2024-07-12 17:42:53.738596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.798 [2024-07-12 17:42:53.738601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.798 [2024-07-12 17:42:53.738613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.798 qpair failed and we were unable to recover it.
00:32:14.798 [2024-07-12 17:42:53.748532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.798 [2024-07-12 17:42:53.748611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.798 [2024-07-12 17:42:53.748625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.798 [2024-07-12 17:42:53.748630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.798 [2024-07-12 17:42:53.748636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.798 [2024-07-12 17:42:53.748649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.798 qpair failed and we were unable to recover it.
00:32:14.798 [2024-07-12 17:42:53.758456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:14.798 [2024-07-12 17:42:53.758531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:14.798 [2024-07-12 17:42:53.758545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:14.798 [2024-07-12 17:42:53.758550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:14.798 [2024-07-12 17:42:53.758556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:14.798 [2024-07-12 17:42:53.758568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:14.798 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.768787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.768892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.768910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.768916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.768922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.768934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.778589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.778670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.778683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.778689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.778694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.778707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.788621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.788705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.788718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.788724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.788729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.788742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.798726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.798813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.798827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.798832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.798837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.798850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.808940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.809046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.809060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.809066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.809072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.809087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.818759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.818840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.818853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.818859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.818864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.818877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.828794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.828877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.828890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.828896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.828902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.828915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.838825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.838920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.838934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.838940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.838946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.838958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.058 qpair failed and we were unable to recover it.
00:32:15.058 [2024-07-12 17:42:53.849044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.058 [2024-07-12 17:42:53.849146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.058 [2024-07-12 17:42:53.849160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.058 [2024-07-12 17:42:53.849166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.058 [2024-07-12 17:42:53.849171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.058 [2024-07-12 17:42:53.849184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.858888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.858973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.858990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.858996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.859001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.859014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.868875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.868956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.868970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.868975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.868980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.868993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.878959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.879032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.879046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.879052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.879057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.879070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.889233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.889383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.889398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.889405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.889410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.889423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.899014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.899108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.899122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.899128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.899137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.899150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.909069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.909147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.909161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.909167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.909172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.909185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.919086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.919166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.919180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.919186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.919192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.919205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.929354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.929461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.929476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.929482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.929487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.929500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.939162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.939275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.939289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.939295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.939301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.939314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.949253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.949346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.949360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.949366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.949372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.949385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.959263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.959344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.959358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.959364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.959370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.959383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.969474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.969578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.969592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.969599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.969604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.969618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.979281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.979362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.979375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.979381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.979386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.979399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.989330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.989408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.059 [2024-07-12 17:42:53.989421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.059 [2024-07-12 17:42:53.989427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.059 [2024-07-12 17:42:53.989436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.059 [2024-07-12 17:42:53.989448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.059 qpair failed and we were unable to recover it.
00:32:15.059 [2024-07-12 17:42:53.999370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.059 [2024-07-12 17:42:53.999456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.060 [2024-07-12 17:42:53.999470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.060 [2024-07-12 17:42:53.999476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.060 [2024-07-12 17:42:53.999482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.060 [2024-07-12 17:42:53.999495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.060 qpair failed and we were unable to recover it.
00:32:15.060 [2024-07-12 17:42:54.009580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.060 [2024-07-12 17:42:54.009693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.060 [2024-07-12 17:42:54.009707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.060 [2024-07-12 17:42:54.009713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.060 [2024-07-12 17:42:54.009719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.060 [2024-07-12 17:42:54.009732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.060 qpair failed and we were unable to recover it.
00:32:15.060 [2024-07-12 17:42:54.019446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.060 [2024-07-12 17:42:54.019580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.060 [2024-07-12 17:42:54.019595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.060 [2024-07-12 17:42:54.019601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.060 [2024-07-12 17:42:54.019606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.060 [2024-07-12 17:42:54.019619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.060 qpair failed and we were unable to recover it.
00:32:15.319 [2024-07-12 17:42:54.029401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.319 [2024-07-12 17:42:54.029480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.319 [2024-07-12 17:42:54.029494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.319 [2024-07-12 17:42:54.029500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.319 [2024-07-12 17:42:54.029505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.319 [2024-07-12 17:42:54.029518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.319 qpair failed and we were unable to recover it.
00:32:15.319 [2024-07-12 17:42:54.039447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.319 [2024-07-12 17:42:54.039522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.319 [2024-07-12 17:42:54.039536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.319 [2024-07-12 17:42:54.039542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.319 [2024-07-12 17:42:54.039547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.320 [2024-07-12 17:42:54.039560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.320 qpair failed and we were unable to recover it.
00:32:15.320 [2024-07-12 17:42:54.049748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.320 [2024-07-12 17:42:54.049889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.320 [2024-07-12 17:42:54.049904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.320 [2024-07-12 17:42:54.049910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.320 [2024-07-12 17:42:54.049916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.320 [2024-07-12 17:42:54.049929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.320 qpair failed and we were unable to recover it.
00:32:15.320 [2024-07-12 17:42:54.059582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.320 [2024-07-12 17:42:54.059661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.320 [2024-07-12 17:42:54.059675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.320 [2024-07-12 17:42:54.059681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.320 [2024-07-12 17:42:54.059686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.320 [2024-07-12 17:42:54.059699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.320 qpair failed and we were unable to recover it.
00:32:15.320 [2024-07-12 17:42:54.069597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.320 [2024-07-12 17:42:54.069673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.320 [2024-07-12 17:42:54.069687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.320 [2024-07-12 17:42:54.069693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.320 [2024-07-12 17:42:54.069698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.320 [2024-07-12 17:42:54.069711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.320 qpair failed and we were unable to recover it.
00:32:15.320 [2024-07-12 17:42:54.079599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.320 [2024-07-12 17:42:54.079681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.320 [2024-07-12 17:42:54.079695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.320 [2024-07-12 17:42:54.079704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.320 [2024-07-12 17:42:54.079710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.320 [2024-07-12 17:42:54.079722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.320 qpair failed and we were unable to recover it.
00:32:15.320 [2024-07-12 17:42:54.089907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.320 [2024-07-12 17:42:54.090017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.320 [2024-07-12 17:42:54.090031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.320 [2024-07-12 17:42:54.090037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.320 [2024-07-12 17:42:54.090043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.320 [2024-07-12 17:42:54.090055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.320 qpair failed and we were unable to recover it.
00:32:15.320 [2024-07-12 17:42:54.099659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.099743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.099757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.099762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.099768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.099781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.109696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.109779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.109793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.109799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.109805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.109817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.119805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.119917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.119932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.119938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.119944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.119958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.130012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.130116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.130131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.130137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.130143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.130156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.139785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.139874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.139888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.139895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.139900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.139913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.149907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.149987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.150001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.150007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.150012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.150026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.159855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.159976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.159991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.159997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.160003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.160016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.170100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.170245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.170264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.170273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.320 [2024-07-12 17:42:54.170279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.320 [2024-07-12 17:42:54.170292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.320 qpair failed and we were unable to recover it. 
00:32:15.320 [2024-07-12 17:42:54.180036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.320 [2024-07-12 17:42:54.180123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.320 [2024-07-12 17:42:54.180136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.320 [2024-07-12 17:42:54.180143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.180148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.180161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.190041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.190142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.190157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.190163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.190168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.190181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.200050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.200122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.200137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.200142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.200148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.200160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.210298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.210432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.210446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.210452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.210458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.210471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.220161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.220248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.220266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.220272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.220278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.220290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.230153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.230263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.230278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.230284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.230289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.230302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.240197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.240288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.240302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.240308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.240314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.240326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.250393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.250493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.250507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.250512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.250518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.250531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.260276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.260353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.260372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.260378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.260383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.260396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.270287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.270382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.270396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.270402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.270407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.270419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.321 [2024-07-12 17:42:54.280330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.321 [2024-07-12 17:42:54.280435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.321 [2024-07-12 17:42:54.280449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.321 [2024-07-12 17:42:54.280455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.321 [2024-07-12 17:42:54.280460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.321 [2024-07-12 17:42:54.280473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.321 qpair failed and we were unable to recover it. 
00:32:15.581 [2024-07-12 17:42:54.290558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.581 [2024-07-12 17:42:54.290666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.290680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.290687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.290692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.290705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.300400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.300487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.300500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.300506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.300512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.300528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.310350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.310433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.310447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.310453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.310459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.310472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.320409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.320485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.320499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.320504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.320510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.320522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.330597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.330704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.330719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.330725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.330730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.330742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.340540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.340636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.340650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.340656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.340661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.340674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.350576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.350656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.350673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.350678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.350684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.350697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.360594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.360671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.360685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.360690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.360696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.360709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.370829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.582 [2024-07-12 17:42:54.370941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.582 [2024-07-12 17:42:54.370955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.582 [2024-07-12 17:42:54.370961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.582 [2024-07-12 17:42:54.370966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.582 [2024-07-12 17:42:54.370980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.582 qpair failed and we were unable to recover it. 
00:32:15.582 [2024-07-12 17:42:54.380683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.582 [2024-07-12 17:42:54.380773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.582 [2024-07-12 17:42:54.380786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.582 [2024-07-12 17:42:54.380792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.582 [2024-07-12 17:42:54.380797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.582 [2024-07-12 17:42:54.380810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.582 qpair failed and we were unable to recover it.
00:32:15.582 [2024-07-12 17:42:54.390694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.582 [2024-07-12 17:42:54.390780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.582 [2024-07-12 17:42:54.390798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.582 [2024-07-12 17:42:54.390804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.390812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.390826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.400675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.400755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.400769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.400775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.400781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.400794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.410992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.411095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.411110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.411115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.411121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.411134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.420731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.420853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.420867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.420873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.420878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.420891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.430761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.430833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.430847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.430853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.430858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.430871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.441034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.441157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.441172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.441178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.441184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.441197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.451231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.451383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.451398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.451403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.451409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.451422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.460909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.461037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.461051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.461057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.461062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.461075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.471033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.471123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.471136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.471142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.471147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.471161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.480986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.481060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.481073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.583 [2024-07-12 17:42:54.481079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.583 [2024-07-12 17:42:54.481088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.583 [2024-07-12 17:42:54.481101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.583 qpair failed and we were unable to recover it.
00:32:15.583 [2024-07-12 17:42:54.491164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.583 [2024-07-12 17:42:54.491287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.583 [2024-07-12 17:42:54.491302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.584 [2024-07-12 17:42:54.491308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.584 [2024-07-12 17:42:54.491313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.584 [2024-07-12 17:42:54.491326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.584 qpair failed and we were unable to recover it.
00:32:15.584 [2024-07-12 17:42:54.501035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.584 [2024-07-12 17:42:54.501127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.584 [2024-07-12 17:42:54.501141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.584 [2024-07-12 17:42:54.501147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.584 [2024-07-12 17:42:54.501152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.584 [2024-07-12 17:42:54.501166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.584 qpair failed and we were unable to recover it.
00:32:15.584 [2024-07-12 17:42:54.511082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.584 [2024-07-12 17:42:54.511214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.584 [2024-07-12 17:42:54.511228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.584 [2024-07-12 17:42:54.511234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.584 [2024-07-12 17:42:54.511239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.584 [2024-07-12 17:42:54.511252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.584 qpair failed and we were unable to recover it.
00:32:15.584 [2024-07-12 17:42:54.521129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.584 [2024-07-12 17:42:54.521204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.584 [2024-07-12 17:42:54.521218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.584 [2024-07-12 17:42:54.521224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.584 [2024-07-12 17:42:54.521229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.584 [2024-07-12 17:42:54.521242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.584 qpair failed and we were unable to recover it.
00:32:15.584 [2024-07-12 17:42:54.531380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.584 [2024-07-12 17:42:54.531512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.584 [2024-07-12 17:42:54.531526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.584 [2024-07-12 17:42:54.531532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.584 [2024-07-12 17:42:54.531538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.584 [2024-07-12 17:42:54.531551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.584 qpair failed and we were unable to recover it.
00:32:15.584 [2024-07-12 17:42:54.541193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.584 [2024-07-12 17:42:54.541275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.584 [2024-07-12 17:42:54.541288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.584 [2024-07-12 17:42:54.541294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.584 [2024-07-12 17:42:54.541299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.584 [2024-07-12 17:42:54.541312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.584 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.551216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.551317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.551332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.551339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.551344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.551358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.561267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.561341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.561354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.561360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.561365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.561378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.571414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.571522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.571536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.571545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.571551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.571563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.581270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.581355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.581368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.581374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.581379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.581392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.591378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.591457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.591470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.591476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.591482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.591495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.601395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.601469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.601483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.601489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.601494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.601507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.611656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.611805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.611819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.611824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.611829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.611842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.621507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.621591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.621604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.621610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.621615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.621627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.631526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.631609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.631623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.631629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.631634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.631647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.641533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.641644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.641658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.641664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.641669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.641683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.651783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.651892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.651906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.651912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.651917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.651929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.661620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.661708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.661721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.661730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.661736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.661748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.671696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.671824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.671838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.671844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.671850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.671863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.681705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.681781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.681794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.681800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.844 [2024-07-12 17:42:54.681805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.844 [2024-07-12 17:42:54.681818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.844 qpair failed and we were unable to recover it.
00:32:15.844 [2024-07-12 17:42:54.691863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.844 [2024-07-12 17:42:54.691970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.844 [2024-07-12 17:42:54.691984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.844 [2024-07-12 17:42:54.691991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.845 [2024-07-12 17:42:54.691996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.845 [2024-07-12 17:42:54.692009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.845 qpair failed and we were unable to recover it.
00:32:15.845 [2024-07-12 17:42:54.701834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.845 [2024-07-12 17:42:54.701917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.845 [2024-07-12 17:42:54.701930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.845 [2024-07-12 17:42:54.701936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.845 [2024-07-12 17:42:54.701942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.845 [2024-07-12 17:42:54.701954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.845 qpair failed and we were unable to recover it.
00:32:15.845 [2024-07-12 17:42:54.711806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.845 [2024-07-12 17:42:54.711878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.845 [2024-07-12 17:42:54.711892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.845 [2024-07-12 17:42:54.711898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.845 [2024-07-12 17:42:54.711903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.845 [2024-07-12 17:42:54.711915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.845 qpair failed and we were unable to recover it.
00:32:15.845 [2024-07-12 17:42:54.721833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.845 [2024-07-12 17:42:54.721966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.845 [2024-07-12 17:42:54.721981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.845 [2024-07-12 17:42:54.721986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.845 [2024-07-12 17:42:54.721992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.845 [2024-07-12 17:42:54.722004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.845 qpair failed and we were unable to recover it.
00:32:15.845 [2024-07-12 17:42:54.732084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.845 [2024-07-12 17:42:54.732223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.845 [2024-07-12 17:42:54.732237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.845 [2024-07-12 17:42:54.732243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.845 [2024-07-12 17:42:54.732248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:15.845 [2024-07-12 17:42:54.732264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:15.845 qpair failed and we were unable to recover it.
00:32:15.845 [2024-07-12 17:42:54.741825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.845 [2024-07-12 17:42:54.741912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.845 [2024-07-12 17:42:54.741926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.845 [2024-07-12 17:42:54.741932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.845 [2024-07-12 17:42:54.741937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.845 [2024-07-12 17:42:54.741950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.845 qpair failed and we were unable to recover it. 
00:32:15.845 [2024-07-12 17:42:54.751944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.845 [2024-07-12 17:42:54.752026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.845 [2024-07-12 17:42:54.752043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.845 [2024-07-12 17:42:54.752049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.845 [2024-07-12 17:42:54.752054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.845 [2024-07-12 17:42:54.752067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.845 qpair failed and we were unable to recover it. 
00:32:15.845 [2024-07-12 17:42:54.761966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.845 [2024-07-12 17:42:54.762041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.845 [2024-07-12 17:42:54.762055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.845 [2024-07-12 17:42:54.762060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.845 [2024-07-12 17:42:54.762066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.845 [2024-07-12 17:42:54.762079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.845 qpair failed and we were unable to recover it. 
00:32:15.845 [2024-07-12 17:42:54.772205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.845 [2024-07-12 17:42:54.772380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.845 [2024-07-12 17:42:54.772394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.845 [2024-07-12 17:42:54.772400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.845 [2024-07-12 17:42:54.772405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.845 [2024-07-12 17:42:54.772418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.845 qpair failed and we were unable to recover it. 
00:32:15.845 [2024-07-12 17:42:54.782027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.845 [2024-07-12 17:42:54.782108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.845 [2024-07-12 17:42:54.782121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.845 [2024-07-12 17:42:54.782127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.845 [2024-07-12 17:42:54.782132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.845 [2024-07-12 17:42:54.782145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.845 qpair failed and we were unable to recover it. 
00:32:15.845 [2024-07-12 17:42:54.792109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.845 [2024-07-12 17:42:54.792186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.845 [2024-07-12 17:42:54.792200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.845 [2024-07-12 17:42:54.792206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.845 [2024-07-12 17:42:54.792211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.845 [2024-07-12 17:42:54.792227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.845 qpair failed and we were unable to recover it. 
00:32:15.845 [2024-07-12 17:42:54.802135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.845 [2024-07-12 17:42:54.802209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.845 [2024-07-12 17:42:54.802222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.845 [2024-07-12 17:42:54.802228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.845 [2024-07-12 17:42:54.802233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:15.845 [2024-07-12 17:42:54.802246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:15.845 qpair failed and we were unable to recover it. 
00:32:16.105 [2024-07-12 17:42:54.812346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.105 [2024-07-12 17:42:54.812450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.105 [2024-07-12 17:42:54.812464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.105 [2024-07-12 17:42:54.812470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.105 [2024-07-12 17:42:54.812476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.105 [2024-07-12 17:42:54.812489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.105 qpair failed and we were unable to recover it. 
00:32:16.105 [2024-07-12 17:42:54.822134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.105 [2024-07-12 17:42:54.822221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.105 [2024-07-12 17:42:54.822234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.105 [2024-07-12 17:42:54.822240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.105 [2024-07-12 17:42:54.822245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.105 [2024-07-12 17:42:54.822261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.105 qpair failed and we were unable to recover it. 
00:32:16.105 [2024-07-12 17:42:54.832204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.105 [2024-07-12 17:42:54.832285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.105 [2024-07-12 17:42:54.832298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.105 [2024-07-12 17:42:54.832304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.105 [2024-07-12 17:42:54.832309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.105 [2024-07-12 17:42:54.832322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.105 qpair failed and we were unable to recover it. 
00:32:16.105 [2024-07-12 17:42:54.842245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.105 [2024-07-12 17:42:54.842334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.105 [2024-07-12 17:42:54.842351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.105 [2024-07-12 17:42:54.842357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.105 [2024-07-12 17:42:54.842363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.105 [2024-07-12 17:42:54.842375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.105 qpair failed and we were unable to recover it. 
00:32:16.105 [2024-07-12 17:42:54.852380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.105 [2024-07-12 17:42:54.852487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.105 [2024-07-12 17:42:54.852502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.852508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.852513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.852526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.862341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.862427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.862441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.862447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.862452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.862465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.872330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.872461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.872475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.872481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.872486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.872499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.882370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.882467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.882480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.882486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.882491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.882601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.892616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.892745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.892759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.892765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.892770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.892782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.902459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.902547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.902561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.902567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.902573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.902585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.912478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.912579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.912594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.912599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.912604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.912617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.922496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.922579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.922593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.922599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.922605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.922618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.932659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.932763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.932780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.932786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.932791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.932804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.942560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.942653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.942668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.942674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.942679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.942692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.952640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.952714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.952728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.952734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.952739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.952752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.962684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.962771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.962785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.962791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.962796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.962809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.972858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.972961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.972974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.972980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.972991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.973005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.982718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.982846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.982860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.982866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.982871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.982884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.106 qpair failed and we were unable to recover it. 
00:32:16.106 [2024-07-12 17:42:54.992661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.106 [2024-07-12 17:42:54.992746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.106 [2024-07-12 17:42:54.992759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.106 [2024-07-12 17:42:54.992765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.106 [2024-07-12 17:42:54.992770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.106 [2024-07-12 17:42:54.992783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.107 [2024-07-12 17:42:55.002759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.107 [2024-07-12 17:42:55.002835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.107 [2024-07-12 17:42:55.002849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.107 [2024-07-12 17:42:55.002855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.107 [2024-07-12 17:42:55.002860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.107 [2024-07-12 17:42:55.002872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.107 [2024-07-12 17:42:55.013002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.107 [2024-07-12 17:42:55.013110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.107 [2024-07-12 17:42:55.013125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.107 [2024-07-12 17:42:55.013130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.107 [2024-07-12 17:42:55.013135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.107 [2024-07-12 17:42:55.013148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.107 [2024-07-12 17:42:55.022965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.107 [2024-07-12 17:42:55.023057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.107 [2024-07-12 17:42:55.023071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.107 [2024-07-12 17:42:55.023077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.107 [2024-07-12 17:42:55.023082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.107 [2024-07-12 17:42:55.023095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.107 [2024-07-12 17:42:55.032869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.107 [2024-07-12 17:42:55.032960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.107 [2024-07-12 17:42:55.032976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.107 [2024-07-12 17:42:55.032982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.107 [2024-07-12 17:42:55.032988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.107 [2024-07-12 17:42:55.033001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.107 [2024-07-12 17:42:55.042937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.107 [2024-07-12 17:42:55.043019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.107 [2024-07-12 17:42:55.043032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.107 [2024-07-12 17:42:55.043038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.107 [2024-07-12 17:42:55.043044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.107 [2024-07-12 17:42:55.043057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.107 [2024-07-12 17:42:55.053115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.107 [2024-07-12 17:42:55.053218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.107 [2024-07-12 17:42:55.053233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.107 [2024-07-12 17:42:55.053239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.107 [2024-07-12 17:42:55.053245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.107 [2024-07-12 17:42:55.053261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.107 [2024-07-12 17:42:55.062969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.107 [2024-07-12 17:42:55.063050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.107 [2024-07-12 17:42:55.063063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.107 [2024-07-12 17:42:55.063071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.107 [2024-07-12 17:42:55.063077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.107 [2024-07-12 17:42:55.063090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.107 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.072987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.073071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.073085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.073091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.073096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.073109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.083063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.083151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.083168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.083175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.083181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.083194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.093251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.093387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.093401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.093407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.093413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.093426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.103129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.103216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.103229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.103235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.103241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.103257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.113134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.113214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.113228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.113233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.113239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.113252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.123164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.123240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.123257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.123264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.123270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.123282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.133311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.133420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.133434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.133440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.133445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.133458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.143219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.143305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.143319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.143324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.143330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.143343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.153249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.153331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.367 [2024-07-12 17:42:55.153345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.367 [2024-07-12 17:42:55.153354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.367 [2024-07-12 17:42:55.153359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.367 [2024-07-12 17:42:55.153372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.367 qpair failed and we were unable to recover it. 
00:32:16.367 [2024-07-12 17:42:55.163285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.367 [2024-07-12 17:42:55.163370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.163384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.163390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.163396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.163409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.173564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.173705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.173720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.173725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.173731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.173744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.183285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.183373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.183386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.183392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.183398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.183411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.193387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.193471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.193485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.193491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.193497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.193510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.203453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.203602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.203616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.203622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.203628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.203641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.213654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.213807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.213821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.213827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.213832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.213845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.223491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.223574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.223588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.223594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.223599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.223612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.233473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.233553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.233566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.233571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.233577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.233589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.243571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.243651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.243668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.243673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.243679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.243691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.253809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.253965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.253980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.253986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.253992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.254005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.263657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.263751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.263768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.263774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.263779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.263792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.273671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.273748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.273762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.273768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.273774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.273787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.283686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.283770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.283784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.283790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.283796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.283812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.293827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.293928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.293943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.293949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.293954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.368 [2024-07-12 17:42:55.293967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.368 qpair failed and we were unable to recover it. 
00:32:16.368 [2024-07-12 17:42:55.303758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.368 [2024-07-12 17:42:55.303868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.368 [2024-07-12 17:42:55.303882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.368 [2024-07-12 17:42:55.303888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.368 [2024-07-12 17:42:55.303893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.369 [2024-07-12 17:42:55.303906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.369 qpair failed and we were unable to recover it. 
00:32:16.369 [2024-07-12 17:42:55.313838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.369 [2024-07-12 17:42:55.313921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.369 [2024-07-12 17:42:55.313935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.369 [2024-07-12 17:42:55.313940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.369 [2024-07-12 17:42:55.313946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.369 [2024-07-12 17:42:55.313959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.369 qpair failed and we were unable to recover it. 
00:32:16.369 [2024-07-12 17:42:55.323804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.369 [2024-07-12 17:42:55.323886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.369 [2024-07-12 17:42:55.323899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.369 [2024-07-12 17:42:55.323905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.369 [2024-07-12 17:42:55.323910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.369 [2024-07-12 17:42:55.323923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.369 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.334085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.334239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.334260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.334266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.334271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.334285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.343889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.343965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.343979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.343984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.343990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.344002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.353914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.354018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.354033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.354039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.354045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.354058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.363959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.364040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.364053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.364059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.364064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.364077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.374200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.374321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.374336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.374341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.374347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.374363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.384049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.384164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.384178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.384184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.384190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.384205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.394132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.394208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.394222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.394228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.394233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.394246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.404125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.404219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.404233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.404239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.404244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.404261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.414352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.414454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.414468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.414474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.414480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.414493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.424224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.424320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.424337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.424343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.424348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.424361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.434201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.434299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.434315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.434321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.434326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.434341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.444243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.444337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.444351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.444357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.444362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.444375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.454469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.454581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.454595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.454601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.454607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.454620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.464319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.464409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.464423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.464429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.464441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.464453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.474313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.474392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.474406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.474413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.474419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.474432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.484349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.484425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.484438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.484444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.484450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.484462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.494584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.494691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.494706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.494712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.494717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.494730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.504409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.504517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.504532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.504537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.504543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.504556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.514462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.514558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.514576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.514582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.514587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.514601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.524464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.524543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.524557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.524562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.524568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.524580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.534745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.534893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.534907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.534913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.629 [2024-07-12 17:42:55.534919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.629 [2024-07-12 17:42:55.534932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.629 qpair failed and we were unable to recover it. 
00:32:16.629 [2024-07-12 17:42:55.544601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.629 [2024-07-12 17:42:55.544679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.629 [2024-07-12 17:42:55.544693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.629 [2024-07-12 17:42:55.544699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.630 [2024-07-12 17:42:55.544704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.630 [2024-07-12 17:42:55.544717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.630 qpair failed and we were unable to recover it. 
00:32:16.630 [2024-07-12 17:42:55.554599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.630 [2024-07-12 17:42:55.554673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.630 [2024-07-12 17:42:55.554687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.630 [2024-07-12 17:42:55.554693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.630 [2024-07-12 17:42:55.554701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.630 [2024-07-12 17:42:55.554714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.630 qpair failed and we were unable to recover it. 
00:32:16.630 [2024-07-12 17:42:55.564652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.630 [2024-07-12 17:42:55.564728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.630 [2024-07-12 17:42:55.564742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.630 [2024-07-12 17:42:55.564748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.630 [2024-07-12 17:42:55.564753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.630 [2024-07-12 17:42:55.564765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.630 qpair failed and we were unable to recover it. 
00:32:16.630 [2024-07-12 17:42:55.574866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.630 [2024-07-12 17:42:55.574977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.630 [2024-07-12 17:42:55.574991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.630 [2024-07-12 17:42:55.574997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.630 [2024-07-12 17:42:55.575002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.630 [2024-07-12 17:42:55.575015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.630 qpair failed and we were unable to recover it. 
00:32:16.630 [2024-07-12 17:42:55.584715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.630 [2024-07-12 17:42:55.584806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.630 [2024-07-12 17:42:55.584824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.630 [2024-07-12 17:42:55.584829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.630 [2024-07-12 17:42:55.584835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.630 [2024-07-12 17:42:55.584847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.630 qpair failed and we were unable to recover it. 
00:32:16.630 [2024-07-12 17:42:55.594672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.630 [2024-07-12 17:42:55.594746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.630 [2024-07-12 17:42:55.594760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.630 [2024-07-12 17:42:55.594766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.630 [2024-07-12 17:42:55.594771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.630 [2024-07-12 17:42:55.594783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.630 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.604747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.604851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.604866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.604873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.604878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.604892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.614966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.615113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.615127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.615133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.615138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.615152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.624842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.624957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.624972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.624978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.624984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.624998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.634831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.634905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.634918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.634924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.634930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.634943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.644988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.645072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.645086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.645096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.645101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.645114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.655161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.655270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.655288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.655294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.655299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.655312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.664955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.665051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.665065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.665071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.665076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.665089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.675061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.675138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.675152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.675158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.675163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.675176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.685027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.685147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.685161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.685167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.685173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.685186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.695286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.695386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.695401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.695407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.695413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.695426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.705121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.705204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.705217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.705222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.705228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.705241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.715057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.715155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.715169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.715175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.715180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.715194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.725164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.725245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.725264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.725270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.725276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.725288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.735394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.735512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.735527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.735539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.735546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.735559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.745237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.745429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.745444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.745450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.745456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.745470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.755313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.755443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.755458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.755464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.755470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.755483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.765251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.765336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.765350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.765356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.765362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.765375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.775550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.775658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.775673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.775679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.775684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.775697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.785380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.785497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.785512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.785518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.785524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.785537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.890 [2024-07-12 17:42:55.795436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.890 [2024-07-12 17:42:55.795519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.890 [2024-07-12 17:42:55.795532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.890 [2024-07-12 17:42:55.795538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.890 [2024-07-12 17:42:55.795544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.890 [2024-07-12 17:42:55.795557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.890 qpair failed and we were unable to recover it. 
00:32:16.891 [2024-07-12 17:42:55.805473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.891 [2024-07-12 17:42:55.805557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.891 [2024-07-12 17:42:55.805570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.891 [2024-07-12 17:42:55.805576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.891 [2024-07-12 17:42:55.805582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.891 [2024-07-12 17:42:55.805595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.891 qpair failed and we were unable to recover it. 
00:32:16.891 [2024-07-12 17:42:55.815665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.891 [2024-07-12 17:42:55.815772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.891 [2024-07-12 17:42:55.815787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.891 [2024-07-12 17:42:55.815793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.891 [2024-07-12 17:42:55.815798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.891 [2024-07-12 17:42:55.815811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.891 qpair failed and we were unable to recover it. 
00:32:16.891 [2024-07-12 17:42:55.825509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.891 [2024-07-12 17:42:55.825601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.891 [2024-07-12 17:42:55.825621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.891 [2024-07-12 17:42:55.825627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.891 [2024-07-12 17:42:55.825633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.891 [2024-07-12 17:42:55.825646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.891 qpair failed and we were unable to recover it. 
00:32:16.891 [2024-07-12 17:42:55.835555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.891 [2024-07-12 17:42:55.835634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.891 [2024-07-12 17:42:55.835648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.891 [2024-07-12 17:42:55.835653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.891 [2024-07-12 17:42:55.835658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.891 [2024-07-12 17:42:55.835671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.891 qpair failed and we were unable to recover it. 
00:32:16.891 [2024-07-12 17:42:55.845591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.891 [2024-07-12 17:42:55.845663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.891 [2024-07-12 17:42:55.845678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.891 [2024-07-12 17:42:55.845683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.891 [2024-07-12 17:42:55.845689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.891 [2024-07-12 17:42:55.845702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.891 qpair failed and we were unable to recover it. 
00:32:16.891 [2024-07-12 17:42:55.855818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.891 [2024-07-12 17:42:55.855926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.891 [2024-07-12 17:42:55.855940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.891 [2024-07-12 17:42:55.855946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.891 [2024-07-12 17:42:55.855952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:16.891 [2024-07-12 17:42:55.855965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:16.891 qpair failed and we were unable to recover it. 
00:32:17.150 [2024-07-12 17:42:55.865677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.150 [2024-07-12 17:42:55.865798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.150 [2024-07-12 17:42:55.865813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.150 [2024-07-12 17:42:55.865820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.150 [2024-07-12 17:42:55.865825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.150 [2024-07-12 17:42:55.865841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.150 qpair failed and we were unable to recover it. 
00:32:17.150 [2024-07-12 17:42:55.875661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.150 [2024-07-12 17:42:55.875739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.150 [2024-07-12 17:42:55.875752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.150 [2024-07-12 17:42:55.875758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.150 [2024-07-12 17:42:55.875763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.150 [2024-07-12 17:42:55.875775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.150 qpair failed and we were unable to recover it. 
00:32:17.150 [2024-07-12 17:42:55.885717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.150 [2024-07-12 17:42:55.885791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.150 [2024-07-12 17:42:55.885805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.150 [2024-07-12 17:42:55.885810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.150 [2024-07-12 17:42:55.885815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.150 [2024-07-12 17:42:55.885828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.150 qpair failed and we were unable to recover it. 
00:32:17.150 [2024-07-12 17:42:55.895963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.150 [2024-07-12 17:42:55.896063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.150 [2024-07-12 17:42:55.896077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.150 [2024-07-12 17:42:55.896084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.150 [2024-07-12 17:42:55.896089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.150 [2024-07-12 17:42:55.896102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.150 qpair failed and we were unable to recover it. 
00:32:17.150 [2024-07-12 17:42:55.905733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.150 [2024-07-12 17:42:55.905817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.150 [2024-07-12 17:42:55.905831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.150 [2024-07-12 17:42:55.905836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.150 [2024-07-12 17:42:55.905842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.150 [2024-07-12 17:42:55.905855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.150 qpair failed and we were unable to recover it. 
00:32:17.150 [2024-07-12 17:42:55.915832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.150 [2024-07-12 17:42:55.915914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.150 [2024-07-12 17:42:55.915931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.150 [2024-07-12 17:42:55.915937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.150 [2024-07-12 17:42:55.915942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.150 [2024-07-12 17:42:55.915955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.150 qpair failed and we were unable to recover it. 
00:32:17.150 [2024-07-12 17:42:55.925889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.926022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.926037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.926044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.926050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.926063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:55.936112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.936286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.936301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.936307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.936313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.936327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:55.945932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.946009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.946023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.946029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.946035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.946048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:55.955948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.956030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.956044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.956050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.956063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.956076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:55.965922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.965998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.966012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.966018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.966023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.966036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:55.976249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.976378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.976393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.976399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.976404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.976417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:55.986136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.986269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.986283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.986289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.986295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.986308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:55.996104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:55.996176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:55.996189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:55.996195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:55.996200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:55.996213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.006153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.006237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.006250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.006261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.006266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.006279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.016356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.016464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.016479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.016484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.016490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.016503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.026206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.026297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.026311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.026316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.026322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.026335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.036213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.036299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.036314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.036320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.036325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.036338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.046242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.046334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.046348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.046353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.046362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.046375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.056462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.056569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.056584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.056590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.056595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.056607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.066240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.066336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.066349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.066355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.066360] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.066373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.076340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.076421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.076435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.076441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.076446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.076460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.086382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.086459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.086472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.086478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.086484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.086496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.096620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.096736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.096750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.096756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.096762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.096776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.106470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.106559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.106573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.106579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.106584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.106597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.151 [2024-07-12 17:42:56.116490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.151 [2024-07-12 17:42:56.116567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.151 [2024-07-12 17:42:56.116580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.151 [2024-07-12 17:42:56.116586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.151 [2024-07-12 17:42:56.116592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.151 [2024-07-12 17:42:56.116604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.151 qpair failed and we were unable to recover it. 
00:32:17.411 [2024-07-12 17:42:56.126525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.411 [2024-07-12 17:42:56.126646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.411 [2024-07-12 17:42:56.126661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.411 [2024-07-12 17:42:56.126667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.411 [2024-07-12 17:42:56.126673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.411 [2024-07-12 17:42:56.126686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.411 qpair failed and we were unable to recover it. 
00:32:17.411 [2024-07-12 17:42:56.136801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.411 [2024-07-12 17:42:56.136912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.411 [2024-07-12 17:42:56.136926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.411 [2024-07-12 17:42:56.136935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.411 [2024-07-12 17:42:56.136940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.411 [2024-07-12 17:42:56.136952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.411 qpair failed and we were unable to recover it. 
00:32:17.411 [2024-07-12 17:42:56.146585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.411 [2024-07-12 17:42:56.146663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.411 [2024-07-12 17:42:56.146677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.411 [2024-07-12 17:42:56.146682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.411 [2024-07-12 17:42:56.146688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.411 [2024-07-12 17:42:56.146701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.411 qpair failed and we were unable to recover it. 
00:32:17.411 [2024-07-12 17:42:56.156579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.411 [2024-07-12 17:42:56.156659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.411 [2024-07-12 17:42:56.156672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.411 [2024-07-12 17:42:56.156678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.411 [2024-07-12 17:42:56.156684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.411 [2024-07-12 17:42:56.156696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.411 qpair failed and we were unable to recover it. 
00:32:17.411 [2024-07-12 17:42:56.166654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.411 [2024-07-12 17:42:56.166727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.411 [2024-07-12 17:42:56.166740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.411 [2024-07-12 17:42:56.166746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.411 [2024-07-12 17:42:56.166752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.411 [2024-07-12 17:42:56.166765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.411 qpair failed and we were unable to recover it. 
00:32:17.411 [2024-07-12 17:42:56.176888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.411 [2024-07-12 17:42:56.177011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.411 [2024-07-12 17:42:56.177025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.411 [2024-07-12 17:42:56.177031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.177036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.177050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.186719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.186824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.186839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.186845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.186850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.186863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.196744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.196824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.196837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.196843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.196848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.196861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.206780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.206860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.206873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.206878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.206884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.206897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.217011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.217164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.217178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.217184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.217190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.217202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.226847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.226938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.226953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.226962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.226968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.226980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.236903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.236998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.237013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.237018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.237024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.237037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.246931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.247015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.247028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.247034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.247040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.247053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.257154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.257259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.257273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.257279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.257285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.257298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.266921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.267017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.267031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.267038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.267044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.267056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.276951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.277035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.277049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.277056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.277061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.277074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.287066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.287169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.287183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.287190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.287195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.287208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.297315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.412 [2024-07-12 17:42:56.297463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.412 [2024-07-12 17:42:56.297477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.412 [2024-07-12 17:42:56.297483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.412 [2024-07-12 17:42:56.297489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.412 [2024-07-12 17:42:56.297502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.412 qpair failed and we were unable to recover it. 
00:32:17.412 [2024-07-12 17:42:56.307133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.307220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.307233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.307239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.307245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.307261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.412 [2024-07-12 17:42:56.317089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.317164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.317182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.317188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.317193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.317206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.412 [2024-07-12 17:42:56.327197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.327274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.327288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.327294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.327299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.327312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.412 [2024-07-12 17:42:56.337437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.337544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.337558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.337564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.337570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.337582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.412 [2024-07-12 17:42:56.347285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.347415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.347429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.347435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.347440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.347453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.412 [2024-07-12 17:42:56.357321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.357403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.357417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.357423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.357428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.357444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.412 [2024-07-12 17:42:56.367311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.367391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.367405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.367410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.367416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.367429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.412 [2024-07-12 17:42:56.377536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.412 [2024-07-12 17:42:56.377649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.412 [2024-07-12 17:42:56.377663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.412 [2024-07-12 17:42:56.377669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.412 [2024-07-12 17:42:56.377674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.412 [2024-07-12 17:42:56.377687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.412 qpair failed and we were unable to recover it.
00:32:17.672 [2024-07-12 17:42:56.387448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.672 [2024-07-12 17:42:56.387581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.672 [2024-07-12 17:42:56.387596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.672 [2024-07-12 17:42:56.387602] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.672 [2024-07-12 17:42:56.387608] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.672 [2024-07-12 17:42:56.387621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.672 qpair failed and we were unable to recover it.
00:32:17.672 [2024-07-12 17:42:56.397457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.397535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.397549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.397554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.397560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.397572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.407466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.407546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.407563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.407568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.407574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.407586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.417682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.417789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.417803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.417809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.417814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.417827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.427541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.427632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.427645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.427651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.427657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.427669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.437528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.437643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.437658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.437665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.437670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.437683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.447600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.447697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.447711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.447718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.447723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.447743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.457822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.457923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.457937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.457943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.457948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.457961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.467688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.467769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.467783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.467789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.467794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.467807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.477728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.477807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.477820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.477826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.477831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.477844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.487729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.487805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.487818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.487824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.487829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.487842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.497966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.498082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.498097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.498103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.498108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.498121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.507817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.507894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.507908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.507914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.507919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.507932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.517830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.517911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.517925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.517931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.517937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.517950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.527886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.527969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.527983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.527988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.527994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.528006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.538110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.538258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.538272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.538278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.538286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.538300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.547940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.548022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.548036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.548041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.548047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.548060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.557997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.558071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.558084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.558090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.558095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.558108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.568004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.568078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.568092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.568098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.568103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.568116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.578273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.578377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.578391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.578397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.578402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.578415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.588089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.588191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.588206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.588212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.588217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.588230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.598133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.598275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.598289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.598295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.598301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.598313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.608155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.608247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.608264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.608269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.608275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.608288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.618371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.673 [2024-07-12 17:42:56.618480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.673 [2024-07-12 17:42:56.618494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.673 [2024-07-12 17:42:56.618500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.673 [2024-07-12 17:42:56.618505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.673 [2024-07-12 17:42:56.618518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.673 qpair failed and we were unable to recover it.
00:32:17.673 [2024-07-12 17:42:56.628195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.674 [2024-07-12 17:42:56.628286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.674 [2024-07-12 17:42:56.628299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.674 [2024-07-12 17:42:56.628308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.674 [2024-07-12 17:42:56.628313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.674 [2024-07-12 17:42:56.628330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.674 qpair failed and we were unable to recover it.
00:32:17.674 [2024-07-12 17:42:56.638244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.674 [2024-07-12 17:42:56.638329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.674 [2024-07-12 17:42:56.638342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.674 [2024-07-12 17:42:56.638348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.674 [2024-07-12 17:42:56.638353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.674 [2024-07-12 17:42:56.638366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.674 qpair failed and we were unable to recover it.
00:32:17.933 [2024-07-12 17:42:56.648276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.933 [2024-07-12 17:42:56.648358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.933 [2024-07-12 17:42:56.648372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.933 [2024-07-12 17:42:56.648378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.933 [2024-07-12 17:42:56.648384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.933 [2024-07-12 17:42:56.648397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.933 qpair failed and we were unable to recover it.
00:32:17.933 [2024-07-12 17:42:56.658508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:17.933 [2024-07-12 17:42:56.658653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:17.933 [2024-07-12 17:42:56.658667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:17.933 [2024-07-12 17:42:56.658673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:17.933 [2024-07-12 17:42:56.658679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:17.933 [2024-07-12 17:42:56.658691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:17.933 qpair failed and we were unable to recover it.
00:32:17.933 [2024-07-12 17:42:56.668387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.933 [2024-07-12 17:42:56.668507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.933 [2024-07-12 17:42:56.668521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.933 [2024-07-12 17:42:56.668528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.933 [2024-07-12 17:42:56.668533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.933 [2024-07-12 17:42:56.668547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.933 qpair failed and we were unable to recover it. 
00:32:17.933 [2024-07-12 17:42:56.678437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.933 [2024-07-12 17:42:56.678517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.933 [2024-07-12 17:42:56.678531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.933 [2024-07-12 17:42:56.678537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.933 [2024-07-12 17:42:56.678542] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.933 [2024-07-12 17:42:56.678555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.933 qpair failed and we were unable to recover it. 
00:32:17.933 [2024-07-12 17:42:56.688445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.933 [2024-07-12 17:42:56.688551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.933 [2024-07-12 17:42:56.688565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.933 [2024-07-12 17:42:56.688571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.933 [2024-07-12 17:42:56.688576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.933 [2024-07-12 17:42:56.688588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.933 qpair failed and we were unable to recover it. 
00:32:17.933 [2024-07-12 17:42:56.698670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.933 [2024-07-12 17:42:56.698782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.933 [2024-07-12 17:42:56.698796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.933 [2024-07-12 17:42:56.698802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.933 [2024-07-12 17:42:56.698807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.933 [2024-07-12 17:42:56.698820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.933 qpair failed and we were unable to recover it. 
00:32:17.933 [2024-07-12 17:42:56.708498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.933 [2024-07-12 17:42:56.708609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.933 [2024-07-12 17:42:56.708623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.933 [2024-07-12 17:42:56.708629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.933 [2024-07-12 17:42:56.708634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.933 [2024-07-12 17:42:56.708647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.933 qpair failed and we were unable to recover it. 
00:32:17.933 [2024-07-12 17:42:56.718559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.933 [2024-07-12 17:42:56.718644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.933 [2024-07-12 17:42:56.718658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.933 [2024-07-12 17:42:56.718667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.718672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.718685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.728568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.728648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.728662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.728667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.728673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.728686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.738810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.738914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.738928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.738934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.738939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.738952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.748638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.748714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.748727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.748733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.748738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.748751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.758693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.758779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.758797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.758803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.758809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.758823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.768706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.768814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.768829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.768835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.768841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.768854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.778934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.779091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.779105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.779111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.779116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.779129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.788781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.788869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.788883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.788889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.788894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.788907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.798811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.798885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.798898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.798904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.798909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.798922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.808888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.809018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.809035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.809041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.809046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.809059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.819063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.819167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.819181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.819187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.819193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.819205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.828932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.829016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.829030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.829036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.829041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.829054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.838928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.839007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.839020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.839026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.839032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.839045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.848950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.849060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.849075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.849081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.849086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.849102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.859199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.934 [2024-07-12 17:42:56.859309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.934 [2024-07-12 17:42:56.859323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.934 [2024-07-12 17:42:56.859329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.934 [2024-07-12 17:42:56.859335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.934 [2024-07-12 17:42:56.859347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.934 qpair failed and we were unable to recover it. 
00:32:17.934 [2024-07-12 17:42:56.869035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.935 [2024-07-12 17:42:56.869115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.935 [2024-07-12 17:42:56.869129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.935 [2024-07-12 17:42:56.869135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.935 [2024-07-12 17:42:56.869140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.935 [2024-07-12 17:42:56.869153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.935 qpair failed and we were unable to recover it. 
00:32:17.935 [2024-07-12 17:42:56.879049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.935 [2024-07-12 17:42:56.879163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.935 [2024-07-12 17:42:56.879177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.935 [2024-07-12 17:42:56.879183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.935 [2024-07-12 17:42:56.879189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.935 [2024-07-12 17:42:56.879201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.935 qpair failed and we were unable to recover it. 
00:32:17.935 [2024-07-12 17:42:56.889090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.935 [2024-07-12 17:42:56.889174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.935 [2024-07-12 17:42:56.889187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.935 [2024-07-12 17:42:56.889193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.935 [2024-07-12 17:42:56.889199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.935 [2024-07-12 17:42:56.889212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.935 qpair failed and we were unable to recover it. 
00:32:17.935 [2024-07-12 17:42:56.899350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.935 [2024-07-12 17:42:56.899457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.935 [2024-07-12 17:42:56.899474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.935 [2024-07-12 17:42:56.899481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.935 [2024-07-12 17:42:56.899486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:17.935 [2024-07-12 17:42:56.899499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:17.935 qpair failed and we were unable to recover it. 
00:32:18.194 [2024-07-12 17:42:56.909144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.194 [2024-07-12 17:42:56.909237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.194 [2024-07-12 17:42:56.909250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.194 [2024-07-12 17:42:56.909259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.194 [2024-07-12 17:42:56.909265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.194 [2024-07-12 17:42:56.909277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.194 qpair failed and we were unable to recover it. 
00:32:18.194 [2024-07-12 17:42:56.919108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.194 [2024-07-12 17:42:56.919181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.194 [2024-07-12 17:42:56.919194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.194 [2024-07-12 17:42:56.919200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.194 [2024-07-12 17:42:56.919205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.194 [2024-07-12 17:42:56.919218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.194 qpair failed and we were unable to recover it. 
00:32:18.194 [2024-07-12 17:42:56.929281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.194 [2024-07-12 17:42:56.929358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.194 [2024-07-12 17:42:56.929371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.194 [2024-07-12 17:42:56.929377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.194 [2024-07-12 17:42:56.929383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.194 [2024-07-12 17:42:56.929396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.194 qpair failed and we were unable to recover it. 
00:32:18.194 [2024-07-12 17:42:56.939385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.194 [2024-07-12 17:42:56.939490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.194 [2024-07-12 17:42:56.939507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.194 [2024-07-12 17:42:56.939513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.194 [2024-07-12 17:42:56.939519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.194 [2024-07-12 17:42:56.939535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.194 qpair failed and we were unable to recover it. 
00:32:18.194 [2024-07-12 17:42:56.949277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.194 [2024-07-12 17:42:56.949364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.194 [2024-07-12 17:42:56.949377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.194 [2024-07-12 17:42:56.949383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.194 [2024-07-12 17:42:56.949389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.194 [2024-07-12 17:42:56.949401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.194 qpair failed and we were unable to recover it.
00:32:18.194 [2024-07-12 17:42:56.959295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.194 [2024-07-12 17:42:56.959369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.194 [2024-07-12 17:42:56.959382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.194 [2024-07-12 17:42:56.959388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.194 [2024-07-12 17:42:56.959393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.194 [2024-07-12 17:42:56.959405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.194 qpair failed and we were unable to recover it.
00:32:18.194 [2024-07-12 17:42:56.969319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.194 [2024-07-12 17:42:56.969394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.194 [2024-07-12 17:42:56.969408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.194 [2024-07-12 17:42:56.969413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.194 [2024-07-12 17:42:56.969419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.194 [2024-07-12 17:42:56.969431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.194 qpair failed and we were unable to recover it.
00:32:18.194 [2024-07-12 17:42:56.979537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:56.979647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:56.979661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:56.979667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:56.979673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:56.979686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:56.989438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:56.989526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:56.989543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:56.989549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:56.989554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:56.989567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:56.999414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:56.999517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:56.999531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:56.999537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:56.999543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:56.999556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.009372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.009498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.009513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.009518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.009524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.009537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.019607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.019713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.019727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.019733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.019738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.019751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.029534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.029618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.029631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.029637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.029645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.029658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.039538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.039621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.039634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.039640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.039645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.039659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.049577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.049665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.049679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.049685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.049690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.049703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.059849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.059958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.059972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.059978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.059983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.059996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.069644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.069722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.069736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.069741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.069746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.069759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.079601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.079694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.079708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.079714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.079720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.079732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.089649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.089760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.089775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.089781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.089787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.089799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.099858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.099959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.099973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.099979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.099984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.099997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.109783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.109863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.109877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.109882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.109888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.109901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.119862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.119941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.119954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.119960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.119969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.119982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.129888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.129967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.129981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.129986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.129992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.130005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.140087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.140201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.140216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.140221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.140227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.140240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.149912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.150033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.150048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.150054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.150059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.150072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.195 [2024-07-12 17:42:57.159907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.195 [2024-07-12 17:42:57.159986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.195 [2024-07-12 17:42:57.160000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.195 [2024-07-12 17:42:57.160006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.195 [2024-07-12 17:42:57.160011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.195 [2024-07-12 17:42:57.160024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.195 qpair failed and we were unable to recover it.
00:32:18.454 [2024-07-12 17:42:57.169991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.454 [2024-07-12 17:42:57.170071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.454 [2024-07-12 17:42:57.170085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.454 [2024-07-12 17:42:57.170091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.454 [2024-07-12 17:42:57.170096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.454 [2024-07-12 17:42:57.170109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.454 qpair failed and we were unable to recover it.
00:32:18.454 [2024-07-12 17:42:57.180206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.454 [2024-07-12 17:42:57.180355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.454 [2024-07-12 17:42:57.180369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.454 [2024-07-12 17:42:57.180375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.454 [2024-07-12 17:42:57.180380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.454 [2024-07-12 17:42:57.180393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.454 qpair failed and we were unable to recover it.
00:32:18.454 [2024-07-12 17:42:57.190076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.454 [2024-07-12 17:42:57.190156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.454 [2024-07-12 17:42:57.190170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.454 [2024-07-12 17:42:57.190175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.454 [2024-07-12 17:42:57.190181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.454 [2024-07-12 17:42:57.190193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.454 qpair failed and we were unable to recover it.
00:32:18.454 [2024-07-12 17:42:57.200056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.454 [2024-07-12 17:42:57.200135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.454 [2024-07-12 17:42:57.200149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.454 [2024-07-12 17:42:57.200155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.454 [2024-07-12 17:42:57.200161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.200175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.210128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.210213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.210226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.210235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.210241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.210262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.220377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.220514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.220529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.220536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.220541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.220554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.230232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.230322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.230337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.230343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.230349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.230362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.240206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.240299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.240314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.240320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.240325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.240338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.250251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.250338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.250352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.250358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.250364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.250377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.260461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.260583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.260597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.260603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.260608] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.260621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.270347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.270447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.270461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.270467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.270473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.270486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.280275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.280362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.280376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.280382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.280387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.280399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.290400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.290485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.290501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.290508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.290513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.290526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.300550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.455 [2024-07-12 17:42:57.300652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.455 [2024-07-12 17:42:57.300670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.455 [2024-07-12 17:42:57.300676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.455 [2024-07-12 17:42:57.300682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90
00:32:18.455 [2024-07-12 17:42:57.300694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.455 qpair failed and we were unable to recover it.
00:32:18.455 [2024-07-12 17:42:57.310355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.310437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.310450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.310456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.310461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.310474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.320398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.320473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.320487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.320493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.320498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.320510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.330497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.330582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.330595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.330602] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.330607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.330620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.340751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.340859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.340873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.340879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.340884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.340897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.350511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.350589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.350603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.350609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.350614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.350627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.360616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.360690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.360704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.360710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.360715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.360728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.370643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.370725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.370739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.370745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.370751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.370763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.380797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.380898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.380913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.380919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.380925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.380938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.390718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.390803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.390819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.390825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.390830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.390843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.400777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.400857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.400871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.400876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.400882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.400895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.410795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.410870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.410883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.410889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.410894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.410907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.455 [2024-07-12 17:42:57.420972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.455 [2024-07-12 17:42:57.421073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.455 [2024-07-12 17:42:57.421087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.455 [2024-07-12 17:42:57.421093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.455 [2024-07-12 17:42:57.421098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.455 [2024-07-12 17:42:57.421111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.455 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.430792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.430878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.430891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.430897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.430903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.714 [2024-07-12 17:42:57.430919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.440872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.440952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.440966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.440971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.440977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.714 [2024-07-12 17:42:57.440990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.450930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.451031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.451045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.451051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.451057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.714 [2024-07-12 17:42:57.451070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.461170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.461274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.461288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.461293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.461299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.714 [2024-07-12 17:42:57.461312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.470941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.471032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.471045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.471051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.471056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0944000b90 00:32:18.714 [2024-07-12 17:42:57.471069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.480999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.481122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.481156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.481168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.481179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f094c000b90 00:32:18.714 [2024-07-12 17:42:57.481203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.491039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.491126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.491149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.491159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.491170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f094c000b90 00:32:18.714 [2024-07-12 17:42:57.491192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.501300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.501421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.501448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.501460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.501470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f093c000b90 00:32:18.714 [2024-07-12 17:42:57.501493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:18.714 qpair failed and we were unable to recover it. 
00:32:18.714 [2024-07-12 17:42:57.511144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.511241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.511270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.511281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.511290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f093c000b90 00:32:18.714 [2024-07-12 17:42:57.511310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:18.714 qpair failed and we were unable to recover it. 00:32:18.714 [2024-07-12 17:42:57.511475] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:18.714 A controller has encountered a failure and is being reset. 
00:32:18.714 [2024-07-12 17:42:57.521191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.714 [2024-07-12 17:42:57.521331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.714 [2024-07-12 17:42:57.521389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.714 [2024-07-12 17:42:57.521424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.714 [2024-07-12 17:42:57.521446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2abe0 00:32:18.715 [2024-07-12 17:42:57.521495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.715 qpair failed and we were unable to recover it. 
00:32:18.715 [2024-07-12 17:42:57.531196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.715 [2024-07-12 17:42:57.531316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.715 [2024-07-12 17:42:57.531348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.715 [2024-07-12 17:42:57.531362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.715 [2024-07-12 17:42:57.531375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2abe0 00:32:18.715 [2024-07-12 17:42:57.531403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.715 qpair failed and we were unable to recover it. 00:32:18.715 [2024-07-12 17:42:57.531518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb38690 (9): Bad file descriptor 00:32:18.715 Controller properly reset. 00:32:18.715 Initializing NVMe Controllers 00:32:18.715 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:18.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:18.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:18.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:18.715 Initialization complete. Launching workers. 
00:32:18.715 Starting thread on core 1 00:32:18.715 Starting thread on core 2 00:32:18.715 Starting thread on core 3 00:32:18.715 Starting thread on core 0 00:32:18.715 17:42:57 -- host/target_disconnect.sh@59 -- # sync 00:32:18.715 00:32:18.715 real 0m11.492s 00:32:18.715 user 0m21.769s 00:32:18.715 sys 0m4.202s 00:32:18.715 17:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.715 17:42:57 -- common/autotest_common.sh@10 -- # set +x 00:32:18.715 ************************************ 00:32:18.715 END TEST nvmf_target_disconnect_tc2 00:32:18.715 ************************************ 00:32:18.715 17:42:57 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:32:18.715 17:42:57 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:18.715 17:42:57 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:32:18.715 17:42:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:18.715 17:42:57 -- nvmf/common.sh@116 -- # sync 00:32:18.715 17:42:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:18.715 17:42:57 -- nvmf/common.sh@119 -- # set +e 00:32:18.715 17:42:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:18.715 17:42:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:18.715 rmmod nvme_tcp 00:32:18.715 rmmod nvme_fabrics 00:32:18.715 rmmod nvme_keyring 00:32:18.715 17:42:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:18.715 17:42:57 -- nvmf/common.sh@123 -- # set -e 00:32:18.715 17:42:57 -- nvmf/common.sh@124 -- # return 0 00:32:18.715 17:42:57 -- nvmf/common.sh@477 -- # '[' -n 125713 ']' 00:32:18.715 17:42:57 -- nvmf/common.sh@478 -- # killprocess 125713 00:32:18.715 17:42:57 -- common/autotest_common.sh@926 -- # '[' -z 125713 ']' 00:32:18.715 17:42:57 -- common/autotest_common.sh@930 -- # kill -0 125713 00:32:18.715 17:42:57 -- common/autotest_common.sh@931 -- # uname 00:32:18.715 17:42:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:18.715 17:42:57 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 125713 00:32:18.973 17:42:57 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:32:18.973 17:42:57 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:32:18.973 17:42:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125713' 00:32:18.973 killing process with pid 125713 00:32:18.973 17:42:57 -- common/autotest_common.sh@945 -- # kill 125713 00:32:18.973 17:42:57 -- common/autotest_common.sh@950 -- # wait 125713 00:32:19.230 17:42:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:19.230 17:42:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:19.230 17:42:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:19.230 17:42:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.230 17:42:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:19.230 17:42:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.230 17:42:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.230 17:42:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.131 17:43:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:21.131 00:32:21.131 real 0m19.812s 00:32:21.131 user 0m48.966s 00:32:21.131 sys 0m8.933s 00:32:21.131 17:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.131 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:32:21.131 ************************************ 00:32:21.131 END TEST nvmf_target_disconnect 00:32:21.131 ************************************ 00:32:21.131 17:43:00 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:32:21.131 17:43:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:21.131 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:32:21.389 17:43:00 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:32:21.389 00:32:21.389 real 25m14.008s 00:32:21.389 user 70m0.236s 00:32:21.389 sys 6m36.509s 00:32:21.389 17:43:00 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:32:21.389 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:32:21.389 ************************************ 00:32:21.389 END TEST nvmf_tcp 00:32:21.389 ************************************ 00:32:21.389 17:43:00 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:32:21.389 17:43:00 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:21.389 17:43:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:21.389 17:43:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:21.389 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:32:21.389 ************************************ 00:32:21.389 START TEST spdkcli_nvmf_tcp 00:32:21.389 ************************************ 00:32:21.389 17:43:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:21.389 * Looking for test storage... 00:32:21.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:21.389 17:43:00 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:21.389 17:43:00 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:21.389 17:43:00 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:21.389 17:43:00 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.389 17:43:00 -- nvmf/common.sh@7 -- # uname -s 00:32:21.390 17:43:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.390 17:43:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.390 17:43:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.390 17:43:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.390 17:43:00 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.390 17:43:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.390 17:43:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.390 17:43:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.390 17:43:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.390 17:43:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.390 17:43:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:21.390 17:43:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:21.390 17:43:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.390 17:43:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.390 17:43:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.390 17:43:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.390 17:43:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.390 17:43:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.390 17:43:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.390 17:43:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.390 17:43:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.390 17:43:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.390 17:43:00 -- paths/export.sh@5 -- # export PATH 00:32:21.390 17:43:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.390 17:43:00 -- nvmf/common.sh@46 -- # : 0 00:32:21.390 17:43:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:21.390 17:43:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:21.390 17:43:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:21.390 17:43:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.390 17:43:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.390 17:43:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:21.390 17:43:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:21.390 17:43:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:21.390 
17:43:00 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:21.390 17:43:00 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:21.390 17:43:00 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:21.390 17:43:00 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:21.390 17:43:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:21.390 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:32:21.390 17:43:00 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:21.390 17:43:00 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=127436 00:32:21.390 17:43:00 -- spdkcli/common.sh@34 -- # waitforlisten 127436 00:32:21.390 17:43:00 -- common/autotest_common.sh@819 -- # '[' -z 127436 ']' 00:32:21.390 17:43:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.390 17:43:00 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:21.390 17:43:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:21.390 17:43:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.390 17:43:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:21.390 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:32:21.390 [2024-07-12 17:43:00.322606] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:21.390 [2024-07-12 17:43:00.322668] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127436 ] 00:32:21.390 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.648 [2024-07-12 17:43:00.403681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:21.648 [2024-07-12 17:43:00.446263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:21.648 [2024-07-12 17:43:00.446444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.648 [2024-07-12 17:43:00.446450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.582 17:43:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:22.582 17:43:01 -- common/autotest_common.sh@852 -- # return 0 00:32:22.582 17:43:01 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:22.582 17:43:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:22.582 17:43:01 -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 17:43:01 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:22.582 17:43:01 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:22.582 17:43:01 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:22.582 17:43:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:22.582 17:43:01 -- common/autotest_common.sh@10 -- # set +x 00:32:22.582 17:43:01 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:22.582 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:22.582 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:22.582 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:22.582 '\''/bdevs/malloc create 32 512 
Malloc5'\'' '\''Malloc5'\'' True 00:32:22.582 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:22.582 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:22.582 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.582 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.582 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:22.582 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:22.582 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:22.582 ' 00:32:22.841 [2024-07-12 17:43:01.692342] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:25.365 [2024-07-12 17:43:03.710515] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.933 [2024-07-12 17:43:04.886858] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:28.501 [2024-07-12 17:43:07.050350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:30.404 [2024-07-12 17:43:08.904722] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:31.780 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:31.780 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:31.780 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 
00:32:31.780 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:31.780 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:31.780 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:31.780 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:31.780 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:31.780 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:31.780 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', 
'127.0.0.1:4261', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:31.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:31.780 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:31.780 17:43:10 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:31.780 17:43:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:31.780 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:32:31.780 17:43:10 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:31.780 17:43:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:31.780 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:32:31.780 17:43:10 -- spdkcli/nvmf.sh@69 -- # check_match 00:32:31.780 17:43:10 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:32.039 17:43:10 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:32.039 17:43:10 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:32.039 17:43:10 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:32.039 17:43:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:32.039 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:32:32.039 17:43:10 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:32.039 17:43:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:32.039 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:32:32.039 17:43:10 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:32.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:32.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:32.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:32.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:32.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:32.039 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:32.039 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:32.039 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 
00:32:32.039 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:32.039 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:32.039 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:32.039 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:32.039 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:32.039 ' 00:32:38.605 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:38.605 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:38.605 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:38.605 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:38.605 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:38.605 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:38.605 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:38.605 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:38.605 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:38.605 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:38.605 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:38.605 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:38.605 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:38.605 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:38.605 17:43:16 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:38.605 17:43:16 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:32:38.605 17:43:16 -- common/autotest_common.sh@10 -- # set +x 00:32:38.605 17:43:16 -- spdkcli/nvmf.sh@90 -- # killprocess 127436 00:32:38.605 17:43:16 -- common/autotest_common.sh@926 -- # '[' -z 127436 ']' 00:32:38.605 17:43:16 -- common/autotest_common.sh@930 -- # kill -0 127436 00:32:38.605 17:43:16 -- common/autotest_common.sh@931 -- # uname 00:32:38.605 17:43:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:38.605 17:43:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127436 00:32:38.605 17:43:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:38.605 17:43:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:38.605 17:43:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127436' 00:32:38.605 killing process with pid 127436 00:32:38.605 17:43:16 -- common/autotest_common.sh@945 -- # kill 127436 00:32:38.605 [2024-07-12 17:43:16.513192] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:38.605 17:43:16 -- common/autotest_common.sh@950 -- # wait 127436 00:32:38.605 17:43:16 -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:38.605 17:43:16 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:38.605 17:43:16 -- spdkcli/common.sh@13 -- # '[' -n 127436 ']' 00:32:38.605 17:43:16 -- spdkcli/common.sh@14 -- # killprocess 127436 00:32:38.605 17:43:16 -- common/autotest_common.sh@926 -- # '[' -z 127436 ']' 00:32:38.605 17:43:16 -- common/autotest_common.sh@930 -- # kill -0 127436 00:32:38.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (127436) - No such process 00:32:38.605 17:43:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 127436 is not found' 00:32:38.605 Process with pid 127436 is not found 00:32:38.605 17:43:16 -- 
spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:38.605 17:43:16 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:38.605 17:43:16 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:38.605 00:32:38.605 real 0m16.538s 00:32:38.605 user 0m34.911s 00:32:38.605 sys 0m0.807s 00:32:38.605 17:43:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.605 17:43:16 -- common/autotest_common.sh@10 -- # set +x 00:32:38.605 ************************************ 00:32:38.605 END TEST spdkcli_nvmf_tcp 00:32:38.605 ************************************ 00:32:38.605 17:43:16 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:38.605 17:43:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:38.605 17:43:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:38.605 17:43:16 -- common/autotest_common.sh@10 -- # set +x 00:32:38.605 ************************************ 00:32:38.605 START TEST nvmf_identify_passthru 00:32:38.605 ************************************ 00:32:38.605 17:43:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:38.605 * Looking for test storage... 
00:32:38.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.605 17:43:16 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.605 17:43:16 -- nvmf/common.sh@7 -- # uname -s 00:32:38.605 17:43:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.605 17:43:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.605 17:43:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.605 17:43:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.605 17:43:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.605 17:43:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.605 17:43:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.605 17:43:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.605 17:43:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.605 17:43:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.605 17:43:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:38.605 17:43:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:38.605 17:43:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.605 17:43:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.605 17:43:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.605 17:43:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.605 17:43:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.605 17:43:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.605 17:43:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.606 17:43:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- paths/export.sh@5 -- # export PATH 00:32:38.606 17:43:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- nvmf/common.sh@46 -- # : 0 00:32:38.606 17:43:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:38.606 17:43:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:38.606 
17:43:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:38.606 17:43:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.606 17:43:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.606 17:43:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:38.606 17:43:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:38.606 17:43:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:38.606 17:43:16 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.606 17:43:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.606 17:43:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.606 17:43:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.606 17:43:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- paths/export.sh@5 -- # export PATH 00:32:38.606 17:43:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.606 17:43:16 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:38.606 17:43:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:38.606 17:43:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.606 17:43:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:38.606 17:43:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:38.606 17:43:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:38.606 17:43:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.606 17:43:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:38.606 17:43:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.606 17:43:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:38.606 17:43:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:38.606 17:43:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:38.606 17:43:16 -- 
common/autotest_common.sh@10 -- # set +x 00:32:43.878 17:43:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:43.878 17:43:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:43.878 17:43:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:43.878 17:43:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:43.878 17:43:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:43.878 17:43:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:43.878 17:43:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:43.878 17:43:22 -- nvmf/common.sh@294 -- # net_devs=() 00:32:43.878 17:43:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:43.878 17:43:22 -- nvmf/common.sh@295 -- # e810=() 00:32:43.878 17:43:22 -- nvmf/common.sh@295 -- # local -ga e810 00:32:43.878 17:43:22 -- nvmf/common.sh@296 -- # x722=() 00:32:43.878 17:43:22 -- nvmf/common.sh@296 -- # local -ga x722 00:32:43.878 17:43:22 -- nvmf/common.sh@297 -- # mlx=() 00:32:43.878 17:43:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:43.878 17:43:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.878 17:43:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:43.878 17:43:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:43.878 17:43:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:43.878 17:43:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:43.878 17:43:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:43.878 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:43.878 17:43:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:43.878 17:43:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:43.878 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:43.878 17:43:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:43.878 17:43:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:43.878 17:43:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:43.878 17:43:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:43.878 17:43:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.878 17:43:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:43.878 Found net devices under 0000:af:00.0: cvl_0_0 00:32:43.878 17:43:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.878 17:43:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:43.878 17:43:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.878 17:43:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:43.878 17:43:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.878 17:43:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:43.878 Found net devices under 0000:af:00.1: cvl_0_1 00:32:43.878 17:43:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.878 17:43:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:43.878 17:43:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:43.878 17:43:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:43.878 17:43:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.878 17:43:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.878 17:43:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.878 17:43:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:43.878 17:43:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.878 17:43:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.878 17:43:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:43.878 17:43:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.878 17:43:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:43.878 17:43:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:43.878 17:43:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:43.878 17:43:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.878 17:43:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.878 17:43:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.878 17:43:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.878 17:43:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:43.878 17:43:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.878 17:43:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.878 17:43:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.878 17:43:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:43.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:32:43.878 00:32:43.878 --- 10.0.0.2 ping statistics --- 00:32:43.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.878 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:32:43.878 17:43:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:43.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:32:43.878 00:32:43.878 --- 10.0.0.1 ping statistics --- 00:32:43.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.878 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:32:43.878 17:43:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.878 17:43:22 -- nvmf/common.sh@410 -- # return 0 00:32:43.878 17:43:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:43.878 17:43:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.878 17:43:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:43.878 17:43:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.878 17:43:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:43.878 17:43:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:43.878 17:43:22 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:43.878 17:43:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:43.878 17:43:22 -- common/autotest_common.sh@10 -- # set +x 00:32:43.878 17:43:22 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:43.878 17:43:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:43.878 17:43:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:43.878 17:43:22 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:43.878 17:43:22 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:43.878 17:43:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:43.878 17:43:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:43.878 17:43:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:43.878 17:43:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:43.878 17:43:22 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:43.878 17:43:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:43.878 17:43:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:32:43.878 17:43:22 -- common/autotest_common.sh@1512 -- # echo 0000:86:00.0 00:32:43.878 17:43:22 -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:32:43.878 17:43:22 -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:32:43.878 17:43:22 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:43.878 17:43:22 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:32:43.878 17:43:22 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:43.878 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.061 17:43:26 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:32:48.061 17:43:26 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:32:48.061 17:43:26 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:48.061 17:43:26 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:48.061 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.254 17:43:30 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:52.254 17:43:30 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:52.254 17:43:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:52.254 17:43:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.254 17:43:30 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:52.254 17:43:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:52.254 17:43:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.254 17:43:30 -- target/identify_passthru.sh@31 -- # 
nvmfpid=135085 00:32:52.254 17:43:30 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:52.254 17:43:30 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:52.254 17:43:30 -- target/identify_passthru.sh@35 -- # waitforlisten 135085 00:32:52.254 17:43:30 -- common/autotest_common.sh@819 -- # '[' -z 135085 ']' 00:32:52.254 17:43:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.254 17:43:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:52.254 17:43:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.254 17:43:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:52.254 17:43:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.254 [2024-07-12 17:43:30.956397] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:52.254 [2024-07-12 17:43:30.956454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.254 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.254 [2024-07-12 17:43:31.043207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:52.254 [2024-07-12 17:43:31.085994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:52.254 [2024-07-12 17:43:31.086136] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:52.254 [2024-07-12 17:43:31.086147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.254 [2024-07-12 17:43:31.086156] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:52.254 [2024-07-12 17:43:31.086194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.254 [2024-07-12 17:43:31.086302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.254 [2024-07-12 17:43:31.086402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:52.254 [2024-07-12 17:43:31.086405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.254 17:43:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:52.254 17:43:31 -- common/autotest_common.sh@852 -- # return 0 00:32:52.254 17:43:31 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:52.254 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.254 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:32:52.254 INFO: Log level set to 20 00:32:52.254 INFO: Requests: 00:32:52.254 { 00:32:52.254 "jsonrpc": "2.0", 00:32:52.254 "method": "nvmf_set_config", 00:32:52.254 "id": 1, 00:32:52.254 "params": { 00:32:52.254 "admin_cmd_passthru": { 00:32:52.254 "identify_ctrlr": true 00:32:52.254 } 00:32:52.254 } 00:32:52.254 } 00:32:52.254 00:32:52.254 INFO: response: 00:32:52.254 { 00:32:52.254 "jsonrpc": "2.0", 00:32:52.254 "id": 1, 00:32:52.254 "result": true 00:32:52.254 } 00:32:52.254 00:32:52.254 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.254 17:43:31 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:52.254 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.254 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:32:52.254 INFO: Setting log level to 20 00:32:52.254 INFO: Setting log level to 20 
00:32:52.254 INFO: Log level set to 20 00:32:52.254 INFO: Log level set to 20 00:32:52.254 INFO: Requests: 00:32:52.254 { 00:32:52.254 "jsonrpc": "2.0", 00:32:52.254 "method": "framework_start_init", 00:32:52.254 "id": 1 00:32:52.254 } 00:32:52.254 00:32:52.254 INFO: Requests: 00:32:52.254 { 00:32:52.254 "jsonrpc": "2.0", 00:32:52.254 "method": "framework_start_init", 00:32:52.254 "id": 1 00:32:52.254 } 00:32:52.254 00:32:52.513 [2024-07-12 17:43:31.240236] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:52.513 INFO: response: 00:32:52.513 { 00:32:52.513 "jsonrpc": "2.0", 00:32:52.513 "id": 1, 00:32:52.513 "result": true 00:32:52.513 } 00:32:52.513 00:32:52.513 INFO: response: 00:32:52.513 { 00:32:52.513 "jsonrpc": "2.0", 00:32:52.513 "id": 1, 00:32:52.513 "result": true 00:32:52.513 } 00:32:52.513 00:32:52.513 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.513 17:43:31 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:52.513 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.513 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:32:52.513 INFO: Setting log level to 40 00:32:52.513 INFO: Setting log level to 40 00:32:52.513 INFO: Setting log level to 40 00:32:52.513 [2024-07-12 17:43:31.254040] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.513 17:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.513 17:43:31 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:52.513 17:43:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:52.513 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:32:52.513 17:43:31 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:32:52.513 17:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.513 17:43:31 -- common/autotest_common.sh@10 -- # set +x 
00:32:55.798 Nvme0n1 00:32:55.798 17:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.798 17:43:34 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:55.798 17:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.798 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:32:55.798 17:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.798 17:43:34 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:55.798 17:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.798 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:32:55.798 17:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.798 17:43:34 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.798 17:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.798 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:32:55.798 [2024-07-12 17:43:34.176214] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.798 17:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.798 17:43:34 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:55.798 17:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.798 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:32:55.798 [2024-07-12 17:43:34.183973] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:55.798 [ 00:32:55.799 { 00:32:55.799 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:55.799 "subtype": "Discovery", 00:32:55.799 "listen_addresses": [], 00:32:55.799 "allow_any_host": true, 00:32:55.799 "hosts": [] 00:32:55.799 }, 00:32:55.799 { 
00:32:55.799 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:55.799 "subtype": "NVMe", 00:32:55.799 "listen_addresses": [ 00:32:55.799 { 00:32:55.799 "transport": "TCP", 00:32:55.799 "trtype": "TCP", 00:32:55.799 "adrfam": "IPv4", 00:32:55.799 "traddr": "10.0.0.2", 00:32:55.799 "trsvcid": "4420" 00:32:55.799 } 00:32:55.799 ], 00:32:55.799 "allow_any_host": true, 00:32:55.799 "hosts": [], 00:32:55.799 "serial_number": "SPDK00000000000001", 00:32:55.799 "model_number": "SPDK bdev Controller", 00:32:55.799 "max_namespaces": 1, 00:32:55.799 "min_cntlid": 1, 00:32:55.799 "max_cntlid": 65519, 00:32:55.799 "namespaces": [ 00:32:55.799 { 00:32:55.799 "nsid": 1, 00:32:55.799 "bdev_name": "Nvme0n1", 00:32:55.799 "name": "Nvme0n1", 00:32:55.799 "nguid": "0F3CC8A99016478A975FCB421C088250", 00:32:55.799 "uuid": "0f3cc8a9-9016-478a-975f-cb421c088250" 00:32:55.799 } 00:32:55.799 ] 00:32:55.799 } 00:32:55.799 ] 00:32:55.799 17:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.799 17:43:34 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:55.799 17:43:34 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:55.799 17:43:34 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:55.799 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.799 17:43:34 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:32:55.799 17:43:34 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:55.799 17:43:34 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:55.799 17:43:34 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:55.799 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.799 
17:43:34 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:55.799 17:43:34 -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:32:55.799 17:43:34 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:55.799 17:43:34 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:55.799 17:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.799 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:32:55.799 17:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.799 17:43:34 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:55.799 17:43:34 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:55.799 17:43:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:55.799 17:43:34 -- nvmf/common.sh@116 -- # sync 00:32:55.799 17:43:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:55.799 17:43:34 -- nvmf/common.sh@119 -- # set +e 00:32:55.799 17:43:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:55.799 17:43:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:55.799 rmmod nvme_tcp 00:32:56.057 rmmod nvme_fabrics 00:32:56.057 rmmod nvme_keyring 00:32:56.057 17:43:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:56.057 17:43:34 -- nvmf/common.sh@123 -- # set -e 00:32:56.057 17:43:34 -- nvmf/common.sh@124 -- # return 0 00:32:56.057 17:43:34 -- nvmf/common.sh@477 -- # '[' -n 135085 ']' 00:32:56.057 17:43:34 -- nvmf/common.sh@478 -- # killprocess 135085 00:32:56.057 17:43:34 -- common/autotest_common.sh@926 -- # '[' -z 135085 ']' 00:32:56.057 17:43:34 -- common/autotest_common.sh@930 -- # kill -0 135085 00:32:56.057 17:43:34 -- common/autotest_common.sh@931 -- # uname 00:32:56.057 17:43:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:56.057 17:43:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135085 00:32:56.057 17:43:34 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:56.057 17:43:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:56.057 17:43:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135085' 00:32:56.057 killing process with pid 135085 00:32:56.058 17:43:34 -- common/autotest_common.sh@945 -- # kill 135085 00:32:56.058 [2024-07-12 17:43:34.857228] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:56.058 17:43:34 -- common/autotest_common.sh@950 -- # wait 135085 00:32:57.435 17:43:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:57.435 17:43:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:57.435 17:43:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:57.435 17:43:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:57.435 17:43:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:57.435 17:43:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.435 17:43:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:57.435 17:43:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.972 17:43:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:59.972 00:32:59.972 real 0m21.709s 00:32:59.972 user 0m28.477s 00:32:59.972 sys 0m5.017s 00:32:59.972 17:43:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.972 17:43:38 -- common/autotest_common.sh@10 -- # set +x 00:32:59.972 ************************************ 00:32:59.972 END TEST nvmf_identify_passthru 00:32:59.972 ************************************ 00:32:59.972 17:43:38 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:59.972 17:43:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:59.972 17:43:38 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:32:59.972 17:43:38 -- common/autotest_common.sh@10 -- # set +x 00:32:59.972 ************************************ 00:32:59.972 START TEST nvmf_dif 00:32:59.972 ************************************ 00:32:59.972 17:43:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:59.972 * Looking for test storage... 00:32:59.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.972 17:43:38 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.972 17:43:38 -- nvmf/common.sh@7 -- # uname -s 00:32:59.972 17:43:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.972 17:43:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.972 17:43:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.972 17:43:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.972 17:43:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.972 17:43:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.972 17:43:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.972 17:43:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.972 17:43:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.972 17:43:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.972 17:43:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:59.972 17:43:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:59.972 17:43:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.972 17:43:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.972 17:43:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.972 17:43:38 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.972 17:43:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.972 17:43:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.972 17:43:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.972 17:43:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.972 17:43:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.972 17:43:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.972 17:43:38 -- paths/export.sh@5 -- # export PATH 00:32:59.972 17:43:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.972 17:43:38 -- nvmf/common.sh@46 -- # : 0 00:32:59.972 17:43:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:59.972 17:43:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:59.972 17:43:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:59.972 17:43:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.972 17:43:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.972 17:43:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:59.972 17:43:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:59.972 17:43:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:59.972 17:43:38 -- target/dif.sh@15 -- # NULL_META=16 00:32:59.972 17:43:38 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:59.972 17:43:38 -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:59.972 17:43:38 -- target/dif.sh@15 -- # NULL_DIF=1 00:32:59.972 17:43:38 -- target/dif.sh@135 -- # nvmftestinit 00:32:59.972 17:43:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:59.972 17:43:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.972 17:43:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:59.972 17:43:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:59.972 17:43:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:59.972 17:43:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.972 17:43:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.973 17:43:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.973 17:43:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:32:59.973 17:43:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:59.973 17:43:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:59.973 17:43:38 -- common/autotest_common.sh@10 -- # set +x 00:33:05.245 17:43:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:05.245 17:43:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:05.245 17:43:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:05.245 17:43:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:05.245 17:43:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:05.245 17:43:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:05.245 17:43:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:05.245 17:43:43 -- nvmf/common.sh@294 -- # net_devs=() 00:33:05.245 17:43:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:05.245 17:43:43 -- nvmf/common.sh@295 -- # e810=() 00:33:05.245 17:43:43 -- nvmf/common.sh@295 -- # local -ga e810 00:33:05.245 17:43:43 -- nvmf/common.sh@296 -- # x722=() 00:33:05.245 17:43:43 -- nvmf/common.sh@296 -- # local -ga x722 00:33:05.245 17:43:43 -- nvmf/common.sh@297 -- # mlx=() 00:33:05.245 17:43:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:05.245 17:43:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.245 17:43:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:05.245 17:43:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:05.245 17:43:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:05.245 17:43:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:05.245 17:43:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:05.245 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:05.245 17:43:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:05.245 17:43:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:05.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:05.245 17:43:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:05.245 17:43:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:05.245 17:43:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.245 17:43:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:05.245 17:43:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.245 17:43:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:05.245 Found net devices under 0000:af:00.0: cvl_0_0 00:33:05.245 17:43:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.245 17:43:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:05.245 17:43:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.245 17:43:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:05.245 17:43:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.245 17:43:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:05.245 Found net devices under 0000:af:00.1: cvl_0_1 00:33:05.245 17:43:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.245 17:43:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:05.245 17:43:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:05.245 17:43:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:05.245 17:43:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:05.245 17:43:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.245 17:43:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.245 17:43:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.245 17:43:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:05.245 17:43:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.245 17:43:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.245 17:43:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
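The `gather_supported_nvmf_pci_devs` scan traced above walks the PCI bus looking for the Intel (0x8086) and Mellanox (0x15b3) device IDs that the harness knows how to drive. A minimal standalone reconstruction of that scan is sketched below; the real helper in `nvmf/common.sh` also buckets hits into `e810`/`x722`/`mlx` arrays and checks the bound driver, and the `scan_nvmf_pci_devs` function name and mock-sysfs demo are inventions of this sketch so it can run without hardware.

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the PCI vendor:device scan from
# gather_supported_nvmf_pci_devs. SYS_PCI is a parameter so the scan
# can be exercised against a mock sysfs tree instead of /sys/bus/pci/devices.
set -euo pipefail

scan_nvmf_pci_devs() {
    local sys_pci=$1 intel=0x8086 mellanox=0x15b3
    local dev vendor device
    for dev in "$sys_pci"/*; do
        [[ -e $dev/vendor && -e $dev/device ]] || continue
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        case "$vendor:$device" in
            "$intel:0x1592"|"$intel:0x159b"|"$intel:0x37d2"|\
            "$mellanox:0x1017"|"$mellanox:0x1019"|"$mellanox:0x101d")
                # Same message shape as the log's "Found 0000:af:00.0 (...)"
                echo "Found ${dev##*/} ($vendor - $device)" ;;
        esac
    done
}

# Demo against a mock tree mirroring the E810 NIC (0x8086:0x159b) this run found.
mock=$(mktemp -d)
mkdir -p "$mock/0000:af:00.0"
printf '0x8086\n' > "$mock/0000:af:00.0/vendor"
printf '0x159b\n' > "$mock/0000:af:00.0/device"
scan_nvmf_pci_devs "$mock"   # -> Found 0000:af:00.0 (0x8086 - 0x159b)
rm -rf "$mock"
```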
00:33:05.245 17:43:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.245 17:43:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.245 17:43:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:05.245 17:43:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:05.245 17:43:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.245 17:43:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.245 17:43:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.245 17:43:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.245 17:43:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:05.245 17:43:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.245 17:43:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.245 17:43:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.245 17:43:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:05.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:33:05.245 00:33:05.245 --- 10.0.0.2 ping statistics --- 00:33:05.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.245 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:33:05.245 17:43:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:05.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:33:05.245 00:33:05.245 --- 10.0.0.1 ping statistics --- 00:33:05.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.245 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:33:05.245 17:43:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.245 17:43:43 -- nvmf/common.sh@410 -- # return 0 00:33:05.245 17:43:43 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:05.245 17:43:43 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:07.782 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:07.782 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:07.782 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:07.782 17:43:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.782 17:43:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 
00:33:07.782 17:43:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:07.782 17:43:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.782 17:43:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:07.782 17:43:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:07.782 17:43:46 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:07.782 17:43:46 -- target/dif.sh@137 -- # nvmfappstart 00:33:07.782 17:43:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:07.782 17:43:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:07.782 17:43:46 -- common/autotest_common.sh@10 -- # set +x 00:33:07.782 17:43:46 -- nvmf/common.sh@469 -- # nvmfpid=140719 00:33:07.782 17:43:46 -- nvmf/common.sh@470 -- # waitforlisten 140719 00:33:07.782 17:43:46 -- common/autotest_common.sh@819 -- # '[' -z 140719 ']' 00:33:07.782 17:43:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.782 17:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:07.782 17:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.782 17:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:07.782 17:43:46 -- common/autotest_common.sh@10 -- # set +x 00:33:07.782 17:43:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:07.782 [2024-07-12 17:43:46.384344] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:33:07.782 [2024-07-12 17:43:46.384399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.782 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.782 [2024-07-12 17:43:46.471061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.782 [2024-07-12 17:43:46.512762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:07.782 [2024-07-12 17:43:46.512905] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.782 [2024-07-12 17:43:46.512916] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.782 [2024-07-12 17:43:46.512925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.782 [2024-07-12 17:43:46.512952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.352 17:43:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:08.352 17:43:47 -- common/autotest_common.sh@852 -- # return 0 00:33:08.352 17:43:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:08.352 17:43:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:08.352 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:33:08.611 17:43:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.611 17:43:47 -- target/dif.sh@139 -- # create_transport 00:33:08.611 17:43:47 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:08.611 17:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:08.611 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:33:08.611 [2024-07-12 17:43:47.332938] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
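With the TCP transport initialized, the dif tests hand fio an `--spdk_json_conf` payload built by `gen_nvmf_target_json`: one `bdev_nvme_attach_controller` stanza per subsystem id, as visible in the traced heredoc later in this run. The standalone helper below is a reconstruction (the `gen_subsystem_json` name is this sketch's, not the harness's); the address, port, and NQN values are the ones this run resolved (`10.0.0.2:4420`).

```shell
#!/usr/bin/env bash
# Reconstruction of the per-subsystem JSON stanza that gen_nvmf_target_json
# emits for the fio spdk_bdev plugin. The real helper collects one stanza per
# positional subsystem id and joins them with IFS=, through jq.
set -euo pipefail

gen_subsystem_json() {
    local subsystem=$1 traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Validate that stanza 0 is well-formed JSON (python3 instead of jq to keep
# this sketch dependency-free).
gen_subsystem_json 0 | python3 -m json.tool > /dev/null && echo "stanza 0: valid JSON"
```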
00:33:08.611 17:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:08.611 17:43:47 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:08.611 17:43:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:08.611 17:43:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:08.611 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:33:08.611 ************************************ 00:33:08.611 START TEST fio_dif_1_default 00:33:08.611 ************************************ 00:33:08.611 17:43:47 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:33:08.611 17:43:47 -- target/dif.sh@86 -- # create_subsystems 0 00:33:08.611 17:43:47 -- target/dif.sh@28 -- # local sub 00:33:08.611 17:43:47 -- target/dif.sh@30 -- # for sub in "$@" 00:33:08.611 17:43:47 -- target/dif.sh@31 -- # create_subsystem 0 00:33:08.611 17:43:47 -- target/dif.sh@18 -- # local sub_id=0 00:33:08.611 17:43:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:08.611 17:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:08.611 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:33:08.611 bdev_null0 00:33:08.611 17:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:08.611 17:43:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:08.611 17:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:08.611 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:33:08.611 17:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:08.612 17:43:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:08.612 17:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:08.612 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:33:08.612 17:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:08.612 17:43:47 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:08.612 17:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:08.612 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:33:08.612 [2024-07-12 17:43:47.373187] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.612 17:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:08.612 17:43:47 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:08.612 17:43:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:08.612 17:43:47 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:08.612 17:43:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:08.612 17:43:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:08.612 17:43:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:08.612 17:43:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:08.612 17:43:47 -- common/autotest_common.sh@1320 -- # shift 00:33:08.612 17:43:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:08.612 17:43:47 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:08.612 17:43:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:08.612 17:43:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:08.612 17:43:47 -- target/dif.sh@82 -- # gen_fio_conf 00:33:08.612 17:43:47 -- nvmf/common.sh@520 -- # config=() 00:33:08.612 17:43:47 -- target/dif.sh@54 -- # local file 00:33:08.612 17:43:47 -- nvmf/common.sh@520 -- # local subsystem config 00:33:08.612 17:43:47 -- target/dif.sh@56 -- # cat 00:33:08.612 17:43:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:33:08.612 17:43:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:08.612 { 00:33:08.612 "params": { 00:33:08.612 "name": "Nvme$subsystem", 00:33:08.612 "trtype": "$TEST_TRANSPORT", 00:33:08.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.612 "adrfam": "ipv4", 00:33:08.612 "trsvcid": "$NVMF_PORT", 00:33:08.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.612 "hdgst": ${hdgst:-false}, 00:33:08.612 "ddgst": ${ddgst:-false} 00:33:08.612 }, 00:33:08.612 "method": "bdev_nvme_attach_controller" 00:33:08.612 } 00:33:08.612 EOF 00:33:08.612 )") 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:08.612 17:43:47 -- nvmf/common.sh@542 -- # cat 00:33:08.612 17:43:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:08.612 17:43:47 -- target/dif.sh@72 -- # (( file <= files )) 00:33:08.612 17:43:47 -- nvmf/common.sh@544 -- # jq . 
00:33:08.612 17:43:47 -- nvmf/common.sh@545 -- # IFS=, 00:33:08.612 17:43:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:08.612 "params": { 00:33:08.612 "name": "Nvme0", 00:33:08.612 "trtype": "tcp", 00:33:08.612 "traddr": "10.0.0.2", 00:33:08.612 "adrfam": "ipv4", 00:33:08.612 "trsvcid": "4420", 00:33:08.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:08.612 "hdgst": false, 00:33:08.612 "ddgst": false 00:33:08.612 }, 00:33:08.612 "method": "bdev_nvme_attach_controller" 00:33:08.612 }' 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:08.612 17:43:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:08.612 17:43:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:08.612 17:43:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:08.612 17:43:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:08.612 17:43:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:08.612 17:43:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:08.871 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:08.871 fio-3.35 00:33:08.871 Starting 1 thread 00:33:09.130 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.389 [2024-07-12 17:43:48.155270] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:09.389 [2024-07-12 17:43:48.155327] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:19.367 00:33:19.367 filename0: (groupid=0, jobs=1): err= 0: pid=141142: Fri Jul 12 17:43:58 2024 00:33:19.367 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10036msec) 00:33:19.367 slat (nsec): min=4250, max=27706, avg=9357.03, stdev=980.57 00:33:19.367 clat (usec): min=40780, max=45774, avg=41104.62, stdev=434.16 00:33:19.367 lat (usec): min=40789, max=45786, avg=41113.98, stdev=434.02 00:33:19.367 clat percentiles (usec): 00:33:19.367 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:19.367 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:19.367 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:33:19.367 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:33:19.367 | 99.99th=[45876] 00:33:19.367 bw ( KiB/s): min= 384, max= 416, per=99.74%, avg=388.80, stdev=11.72, samples=20 00:33:19.367 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:19.367 lat (msec) : 50=100.00% 00:33:19.367 cpu : usr=95.04%, sys=4.65%, ctx=18, majf=0, minf=233 00:33:19.367 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:19.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:19.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:19.367 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:19.367 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:19.367 00:33:19.367 Run status group 0 (all jobs): 00:33:19.367 READ: bw=389KiB/s (398kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=3904KiB (3998kB), run=10036-10036msec 00:33:19.626 17:43:58 -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:19.626 17:43:58 -- target/dif.sh@43 -- # local sub 00:33:19.626 17:43:58 -- target/dif.sh@45 -- # for sub in "$@" 00:33:19.626 17:43:58 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:19.626 17:43:58 -- target/dif.sh@36 -- # local sub_id=0 00:33:19.626 17:43:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 00:33:19.626 real 0m11.129s 00:33:19.626 user 0m21.052s 00:33:19.626 sys 0m0.775s 00:33:19.626 17:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 ************************************ 00:33:19.626 END TEST fio_dif_1_default 00:33:19.626 ************************************ 00:33:19.626 17:43:58 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:19.626 17:43:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:19.626 17:43:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 ************************************ 00:33:19.626 START TEST fio_dif_1_multi_subsystems 00:33:19.626 ************************************ 00:33:19.626 17:43:58 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:33:19.626 17:43:58 -- target/dif.sh@92 -- # local files=1 00:33:19.626 17:43:58 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:19.626 17:43:58 -- target/dif.sh@28 -- # local sub 00:33:19.626 17:43:58 -- target/dif.sh@30 -- # for sub in "$@" 00:33:19.626 17:43:58 -- target/dif.sh@31 -- # create_subsystem 0 00:33:19.626 
17:43:58 -- target/dif.sh@18 -- # local sub_id=0 00:33:19.626 17:43:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 bdev_null0 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 [2024-07-12 17:43:58.543440] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@30 -- # for sub in "$@" 00:33:19.626 17:43:58 -- target/dif.sh@31 -- # create_subsystem 1 00:33:19.626 17:43:58 -- target/dif.sh@18 -- # local sub_id=1 00:33:19.626 17:43:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- 
common/autotest_common.sh@10 -- # set +x 00:33:19.626 bdev_null1 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.626 17:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.626 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:33:19.626 17:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.626 17:43:58 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:19.626 17:43:58 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:19.626 17:43:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:19.626 17:43:58 -- nvmf/common.sh@520 -- # config=() 00:33:19.626 17:43:58 -- nvmf/common.sh@520 -- # local subsystem config 00:33:19.626 17:43:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:19.626 17:43:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:19.626 { 00:33:19.626 "params": { 00:33:19.626 "name": "Nvme$subsystem", 00:33:19.626 "trtype": "$TEST_TRANSPORT", 00:33:19.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:19.626 "adrfam": "ipv4", 00:33:19.626 "trsvcid": "$NVMF_PORT", 00:33:19.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:19.626 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:19.626 "hdgst": ${hdgst:-false}, 00:33:19.626 "ddgst": ${ddgst:-false} 00:33:19.626 }, 00:33:19.626 "method": "bdev_nvme_attach_controller" 00:33:19.626 } 00:33:19.626 EOF 00:33:19.626 )") 00:33:19.626 17:43:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:19.626 17:43:58 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:19.626 17:43:58 -- target/dif.sh@82 -- # gen_fio_conf 00:33:19.626 17:43:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:19.626 17:43:58 -- target/dif.sh@54 -- # local file 00:33:19.627 17:43:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:19.627 17:43:58 -- target/dif.sh@56 -- # cat 00:33:19.627 17:43:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:19.627 17:43:58 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:19.627 17:43:58 -- common/autotest_common.sh@1320 -- # shift 00:33:19.627 17:43:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:19.627 17:43:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:19.627 17:43:58 -- nvmf/common.sh@542 -- # cat 00:33:19.627 17:43:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:19.627 17:43:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:19.627 17:43:58 -- target/dif.sh@72 -- # (( file <= files )) 00:33:19.627 17:43:58 -- target/dif.sh@73 -- # cat 00:33:19.627 17:43:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:19.627 17:43:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:19.627 17:43:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:19.627 17:43:58 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:19.627 { 00:33:19.627 "params": { 00:33:19.627 "name": "Nvme$subsystem", 00:33:19.627 "trtype": "$TEST_TRANSPORT", 00:33:19.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:19.627 "adrfam": "ipv4", 00:33:19.627 "trsvcid": "$NVMF_PORT", 00:33:19.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:19.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:19.627 "hdgst": ${hdgst:-false}, 00:33:19.627 "ddgst": ${ddgst:-false} 00:33:19.627 }, 00:33:19.627 "method": "bdev_nvme_attach_controller" 00:33:19.627 } 00:33:19.627 EOF 00:33:19.627 )") 00:33:19.627 17:43:58 -- target/dif.sh@72 -- # (( file++ )) 00:33:19.627 17:43:58 -- target/dif.sh@72 -- # (( file <= files )) 00:33:19.627 17:43:58 -- nvmf/common.sh@542 -- # cat 00:33:19.885 17:43:58 -- nvmf/common.sh@544 -- # jq . 00:33:19.885 17:43:58 -- nvmf/common.sh@545 -- # IFS=, 00:33:19.885 17:43:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:19.885 "params": { 00:33:19.885 "name": "Nvme0", 00:33:19.885 "trtype": "tcp", 00:33:19.885 "traddr": "10.0.0.2", 00:33:19.885 "adrfam": "ipv4", 00:33:19.885 "trsvcid": "4420", 00:33:19.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:19.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:19.885 "hdgst": false, 00:33:19.885 "ddgst": false 00:33:19.885 }, 00:33:19.885 "method": "bdev_nvme_attach_controller" 00:33:19.885 },{ 00:33:19.885 "params": { 00:33:19.885 "name": "Nvme1", 00:33:19.885 "trtype": "tcp", 00:33:19.885 "traddr": "10.0.0.2", 00:33:19.885 "adrfam": "ipv4", 00:33:19.885 "trsvcid": "4420", 00:33:19.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:19.885 "hdgst": false, 00:33:19.885 "ddgst": false 00:33:19.885 }, 00:33:19.885 "method": "bdev_nvme_attach_controller" 00:33:19.885 }' 00:33:19.885 17:43:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:19.885 17:43:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:19.885 17:43:58 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:19.885 17:43:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:19.885 17:43:58 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:19.885 17:43:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:19.885 17:43:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:19.885 17:43:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:19.885 17:43:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:19.885 17:43:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.144 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:20.144 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:20.144 fio-3.35 00:33:20.144 Starting 2 threads 00:33:20.144 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.088 [2024-07-12 17:43:59.709920] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:21.088 [2024-07-12 17:43:59.709972] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:33:31.068
00:33:31.068 filename0: (groupid=0, jobs=1): err= 0: pid=143417: Fri Jul 12 17:44:09 2024
00:33:31.068 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10012msec)
00:33:31.068 slat (nsec): min=4402, max=62178, avg=11428.63, stdev=3812.16
00:33:31.068 clat (usec): min=40796, max=47961, avg=41167.22, stdev=573.67
00:33:31.068 lat (usec): min=40806, max=47980, avg=41178.64, stdev=573.64
00:33:31.068 clat percentiles (usec):
00:33:31.068 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:33:31.068 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:33:31.068 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206],
00:33:31.068 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973],
00:33:31.068 | 99.99th=[47973]
00:33:31.068 bw ( KiB/s): min= 384, max= 416, per=49.93%, avg=387.20, stdev= 9.85, samples=20
00:33:31.068 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20
00:33:31.068 lat (msec) : 50=100.00%
00:33:31.068 cpu : usr=97.94%, sys=1.76%, ctx=15, majf=0, minf=173
00:33:31.068 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:31.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:31.068 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:31.068 latency : target=0, window=0, percentile=100.00%, depth=4
00:33:31.068 filename1: (groupid=0, jobs=1): err= 0: pid=143418: Fri Jul 12 17:44:09 2024
00:33:31.068 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10010msec)
00:33:31.068 slat (nsec): min=9447, max=33499, avg=22198.80, stdev=3147.28
00:33:31.068 clat (usec): min=712, max=45534, avg=41298.79, stdev=2670.88
00:33:31.068 lat (usec): min=733, max=45560, avg=41320.99, stdev=2670.87
00:33:31.068 clat percentiles (usec):
00:33:31.068 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:33:31.068 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681],
00:33:31.068 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:33:31.068 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351],
00:33:31.068 | 99.99th=[45351]
00:33:31.068 bw ( KiB/s): min= 352, max= 416, per=49.67%, avg=385.60, stdev=12.61, samples=20
00:33:31.068 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20
00:33:31.068 lat (usec) : 750=0.21%, 1000=0.21%
00:33:31.068 lat (msec) : 50=99.59%
00:33:31.068 cpu : usr=97.34%, sys=2.19%, ctx=14, majf=0, minf=134
00:33:31.068 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:31.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:31.068 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:31.068 latency : target=0, window=0, percentile=100.00%, depth=4
00:33:31.068
00:33:31.068 Run status group 0 (all jobs):
00:33:31.068 READ: bw=775KiB/s (794kB/s), 387KiB/s-388KiB/s (396kB/s-398kB/s), io=7760KiB (7946kB), run=10010-10012msec
00:33:31.068 17:44:09 -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:33:31.068 17:44:09 -- target/dif.sh@43 -- # local sub
00:33:31.068 17:44:09 -- target/dif.sh@45 -- # for sub in "$@"
00:33:31.068 17:44:09 -- target/dif.sh@46 -- # destroy_subsystem 0
00:33:31.068 17:44:09 -- target/dif.sh@36 -- # local sub_id=0
00:33:31.068 17:44:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:31.068 17:44:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.068 17:44:09 -- common/autotest_common.sh@10 -- # set +x
00:33:31.068 17:44:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.068 17:44:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:33:31.068 17:44:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.068 17:44:09 -- common/autotest_common.sh@10 -- # set +x
00:33:31.068 17:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.068 17:44:10 -- target/dif.sh@45 -- # for sub in "$@"
00:33:31.068 17:44:10 -- target/dif.sh@46 -- # destroy_subsystem 1
00:33:31.068 17:44:10 -- target/dif.sh@36 -- # local sub_id=1
00:33:31.068 17:44:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:31.068 17:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.068 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.068 17:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.068 17:44:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:33:31.068 17:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.068 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.068 17:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.068
00:33:31.068 real 0m11.508s
00:33:31.068 user 0m31.264s
00:33:31.068 sys 0m0.809s
00:33:31.068 17:44:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:31.068 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.068 ************************************
00:33:31.068 END TEST fio_dif_1_multi_subsystems
00:33:31.068 ************************************
00:33:31.327 17:44:10 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:33:31.327 17:44:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:33:31.327 17:44:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:33:31.327 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.327 ************************************
00:33:31.327 START TEST fio_dif_rand_params
00:33:31.327 ************************************
00:33:31.327 17:44:10 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params
00:33:31.327 17:44:10 -- target/dif.sh@100 -- # local NULL_DIF
00:33:31.327 17:44:10 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:33:31.327 17:44:10 -- target/dif.sh@103 -- # NULL_DIF=3
00:33:31.327 17:44:10 -- target/dif.sh@103 -- # bs=128k
00:33:31.327 17:44:10 -- target/dif.sh@103 -- # numjobs=3
00:33:31.327 17:44:10 -- target/dif.sh@103 -- # iodepth=3
00:33:31.327 17:44:10 -- target/dif.sh@103 -- # runtime=5
00:33:31.327 17:44:10 -- target/dif.sh@105 -- # create_subsystems 0
00:33:31.327 17:44:10 -- target/dif.sh@28 -- # local sub
00:33:31.327 17:44:10 -- target/dif.sh@30 -- # for sub in "$@"
00:33:31.327 17:44:10 -- target/dif.sh@31 -- # create_subsystem 0
00:33:31.327 17:44:10 -- target/dif.sh@18 -- # local sub_id=0
00:33:31.327 17:44:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:33:31.327 17:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.327 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.327 bdev_null0
00:33:31.327 17:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.327 17:44:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:33:31.327 17:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.327 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.327 17:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.327 17:44:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:33:31.327 17:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.327 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.327 17:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.327 17:44:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:33:31.327 17:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:31.327 17:44:10 -- common/autotest_common.sh@10 -- # set +x
00:33:31.327 [2024-07-12 17:44:10.094295] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:31.327 17:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:31.327 17:44:10 -- target/dif.sh@106 -- # fio /dev/fd/62
00:33:31.327 17:44:10 -- target/dif.sh@106 -- # create_json_sub_conf 0
00:33:31.327 17:44:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:33:31.327 17:44:10 -- nvmf/common.sh@520 -- # config=()
00:33:31.327 17:44:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:33:31.327 17:44:10 -- nvmf/common.sh@520 -- # local subsystem config
00:33:31.327 17:44:10 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:33:31.327 17:44:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:33:31.327 17:44:10 -- target/dif.sh@82 -- # gen_fio_conf
00:33:31.327 17:44:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:33:31.327 {
00:33:31.327 "params": {
00:33:31.327 "name": "Nvme$subsystem",
00:33:31.327 "trtype": "$TEST_TRANSPORT",
00:33:31.327 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:31.327 "adrfam": "ipv4",
00:33:31.327 "trsvcid": "$NVMF_PORT",
00:33:31.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:31.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:31.327 "hdgst": ${hdgst:-false},
00:33:31.327 "ddgst": ${ddgst:-false}
00:33:31.327 },
00:33:31.327 "method": "bdev_nvme_attach_controller"
00:33:31.327 }
00:33:31.327 EOF
00:33:31.327 )")
00:33:31.327 17:44:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:33:31.327 17:44:10 -- target/dif.sh@54 -- # local file
00:33:31.327 17:44:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:33:31.327 17:44:10 -- target/dif.sh@56 -- # cat
00:33:31.327 17:44:10 -- common/autotest_common.sh@1318 -- # local sanitizers
00:33:31.327 17:44:10 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:33:31.327 17:44:10 -- common/autotest_common.sh@1320 -- # shift
00:33:31.327 17:44:10 -- nvmf/common.sh@542 -- # cat
00:33:31.327 17:44:10 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:33:31.327 17:44:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:33:31.327 17:44:10 -- target/dif.sh@72 -- # (( file = 1 ))
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:33:31.327 17:44:10 -- target/dif.sh@72 -- # (( file <= files ))
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # grep libasan
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:33:31.327 17:44:10 -- nvmf/common.sh@544 -- # jq .
00:33:31.327 17:44:10 -- nvmf/common.sh@545 -- # IFS=,
00:33:31.327 17:44:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:33:31.327 "params": {
00:33:31.327 "name": "Nvme0",
00:33:31.327 "trtype": "tcp",
00:33:31.327 "traddr": "10.0.0.2",
00:33:31.327 "adrfam": "ipv4",
00:33:31.327 "trsvcid": "4420",
00:33:31.327 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:31.327 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:31.327 "hdgst": false,
00:33:31.327 "ddgst": false
00:33:31.327 },
00:33:31.327 "method": "bdev_nvme_attach_controller"
00:33:31.327 }'
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # asan_lib=
00:33:31.327 17:44:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:33:31.327 17:44:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:33:31.327 17:44:10 -- common/autotest_common.sh@1324 -- # asan_lib=
00:33:31.327 17:44:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:33:31.327 17:44:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:33:31.327 17:44:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:33:31.586 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:33:31.586 ...
00:33:31.586 fio-3.35
00:33:31.586 Starting 3 threads
00:33:31.586 EAL: No free 2048 kB hugepages reported on node 1
00:33:32.152 [2024-07-12 17:44:11.047904] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:33:32.152 [2024-07-12 17:44:11.047954] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:33:37.445
00:33:37.445 filename0: (groupid=0, jobs=1): err= 0: pid=145437: Fri Jul 12 17:44:16 2024
00:33:37.445 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(134MiB/5047msec)
00:33:37.445 slat (nsec): min=4300, max=42163, avg=14527.30, stdev=3341.24
00:33:37.445 clat (usec): min=6347, max=55559, avg=14026.29, stdev=7222.09
00:33:37.445 lat (usec): min=6357, max=55576, avg=14040.81, stdev=7221.94
00:33:37.445 clat percentiles (usec):
00:33:37.445 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10683], 20.00th=[11338],
00:33:37.445 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304],
00:33:37.445 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15008], 95.00th=[15926],
00:33:37.445 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313],
00:33:37.445 | 99.99th=[55313]
00:33:37.445 bw ( KiB/s): min=17920, max=32000, per=33.26%, avg=27443.20, stdev=4099.20, samples=10
00:33:37.445 iops : min= 140, max= 250, avg=214.40, stdev=32.02, samples=10
00:33:37.445 lat (msec) : 10=2.70%, 20=93.77%, 50=0.65%, 100=2.88%
00:33:37.445 cpu : usr=95.48%, sys=4.16%, ctx=11, majf=0, minf=42
00:33:37.445 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:37.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:37.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:37.445 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:37.445 latency : target=0, window=0, percentile=100.00%, depth=3
00:33:37.445 filename0: (groupid=0, jobs=1): err= 0: pid=145438: Fri Jul 12 17:44:16 2024
00:33:37.445 read: IOPS=217, BW=27.1MiB/s (28.4MB/s)(137MiB/5045msec)
00:33:37.445 slat (nsec): min=9435, max=45435, avg=27915.21, stdev=2555.95
00:33:37.445 clat (usec): min=6175, max=52478, avg=13751.51, stdev=3062.89
00:33:37.445 lat (usec): min=6196, max=52506, avg=13779.42, stdev=3063.13
00:33:37.445 clat percentiles (usec):
00:33:37.445 | 1.00th=[ 7701], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[12649],
00:33:37.445 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14222],
00:33:37.445 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[16188],
00:33:37.445 | 99.00th=[17695], 99.50th=[17957], 99.90th=[48497], 99.95th=[52691],
00:33:37.445 | 99.99th=[52691]
00:33:37.445 bw ( KiB/s): min=26112, max=30720, per=33.88%, avg=27955.20, stdev=1501.48, samples=10
00:33:37.445 iops : min= 204, max= 240, avg=218.40, stdev=11.73, samples=10
00:33:37.445 lat (msec) : 10=8.49%, 20=91.05%, 50=0.37%, 100=0.09%
00:33:37.445 cpu : usr=93.77%, sys=5.65%, ctx=5, majf=0, minf=93
00:33:37.445 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:37.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:37.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:37.445 issued rwts: total=1095,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:37.445 latency : target=0, window=0, percentile=100.00%, depth=3
00:33:37.445 filename0: (groupid=0, jobs=1): err= 0: pid=145439: Fri Jul 12 17:44:16 2024
00:33:37.445 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(135MiB/5003msec)
00:33:37.445 slat (nsec): min=9299, max=58257, avg=16019.25, stdev=2787.22
00:33:37.445 clat (usec): min=5346, max=54356, avg=13839.70, stdev=3783.44
00:33:37.445 lat (usec): min=5356, max=54377, avg=13855.71, stdev=3783.61
00:33:37.445 clat percentiles (usec):
00:33:37.445 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11994],
00:33:37.445 | 30.00th=[12780], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484],
00:33:37.445 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581],
00:33:37.445 | 99.00th=[18482], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264],
00:33:37.445 | 99.99th=[54264]
00:33:37.445 bw ( KiB/s): min=25856, max=30208, per=33.54%, avg=27668.00, stdev=1398.40, samples=10
00:33:37.445 iops : min= 202, max= 236, avg=216.10, stdev=10.92, samples=10
00:33:37.445 lat (msec) : 10=9.70%, 20=89.75%, 100=0.55%
00:33:37.445 cpu : usr=95.78%, sys=3.88%, ctx=9, majf=0, minf=158
00:33:37.445 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:37.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:37.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:37.445 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:37.445 latency : target=0, window=0, percentile=100.00%, depth=3
00:33:37.445
00:33:37.445 Run status group 0 (all jobs):
00:33:37.445 READ: bw=80.6MiB/s (84.5MB/s), 26.6MiB/s-27.1MiB/s (27.9MB/s-28.4MB/s), io=407MiB (426MB), run=5003-5047msec
00:33:37.445 17:44:16 -- target/dif.sh@107 -- # destroy_subsystems 0
00:33:37.445 17:44:16 -- target/dif.sh@43 -- # local sub
00:33:37.445 17:44:16 -- target/dif.sh@45 -- # for sub in "$@"
00:33:37.445 17:44:16 -- target/dif.sh@46 -- # destroy_subsystem 0
00:33:37.445 17:44:16 -- target/dif.sh@36 -- # local sub_id=0
00:33:37.445 17:44:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:37.445 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.445 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.445 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.445 17:44:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:33:37.445 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.445 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.445 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.445 17:44:16 -- target/dif.sh@109 -- # NULL_DIF=2
00:33:37.445 17:44:16 -- target/dif.sh@109 -- # bs=4k
00:33:37.445 17:44:16 -- target/dif.sh@109 -- # numjobs=8
00:33:37.445 17:44:16 -- target/dif.sh@109 -- # iodepth=16
00:33:37.445 17:44:16 -- target/dif.sh@109 -- # runtime=
00:33:37.445 17:44:16 -- target/dif.sh@109 -- # files=2
00:33:37.445 17:44:16 -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:33:37.445 17:44:16 -- target/dif.sh@28 -- # local sub
00:33:37.445 17:44:16 -- target/dif.sh@30 -- # for sub in "$@"
00:33:37.445 17:44:16 -- target/dif.sh@31 -- # create_subsystem 0
00:33:37.445 17:44:16 -- target/dif.sh@18 -- # local sub_id=0
00:33:37.445 17:44:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:33:37.445 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.445 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.445 bdev_null0
00:33:37.445 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.445 17:44:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:33:37.445 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.445 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 [2024-07-12 17:44:16.426988] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@30 -- # for sub in "$@"
00:33:37.718 17:44:16 -- target/dif.sh@31 -- # create_subsystem 1
00:33:37.718 17:44:16 -- target/dif.sh@18 -- # local sub_id=1
00:33:37.718 17:44:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 bdev_null1
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@30 -- # for sub in "$@"
00:33:37.718 17:44:16 -- target/dif.sh@31 -- # create_subsystem 2
00:33:37.718 17:44:16 -- target/dif.sh@18 -- # local sub_id=2
00:33:37.718 17:44:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 bdev_null2
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:33:37.718 17:44:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:37.718 17:44:16 -- common/autotest_common.sh@10 -- # set +x
00:33:37.718 17:44:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:37.718 17:44:16 -- target/dif.sh@112 -- # fio /dev/fd/62
00:33:37.718 17:44:16 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:33:37.718 17:44:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:33:37.718 17:44:16 -- nvmf/common.sh@520 -- # config=()
00:33:37.718 17:44:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:33:37.718 17:44:16 -- nvmf/common.sh@520 -- # local subsystem config
00:33:37.718 17:44:16 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:33:37.718 17:44:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:33:37.718 17:44:16 -- target/dif.sh@82 -- # gen_fio_conf
00:33:37.718 17:44:16 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:33:37.718 17:44:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:33:37.718 {
00:33:37.718 "params": {
00:33:37.718 "name": "Nvme$subsystem",
00:33:37.718 "trtype": "$TEST_TRANSPORT",
00:33:37.718 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:37.718 "adrfam": "ipv4",
00:33:37.718 "trsvcid": "$NVMF_PORT",
00:33:37.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:37.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:37.718 "hdgst": ${hdgst:-false},
00:33:37.718 "ddgst": ${ddgst:-false}
00:33:37.718 },
00:33:37.718 "method": "bdev_nvme_attach_controller"
00:33:37.718 }
00:33:37.718 EOF
00:33:37.718 )")
00:33:37.718 17:44:16 -- target/dif.sh@54 -- # local file
00:33:37.718 17:44:16 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:33:37.718 17:44:16 -- common/autotest_common.sh@1318 -- # local sanitizers
00:33:37.718 17:44:16 -- target/dif.sh@56 -- # cat
00:33:37.718 17:44:16 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:33:37.718 17:44:16 -- common/autotest_common.sh@1320 -- # shift
00:33:37.718 17:44:16 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:33:37.718 17:44:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:33:37.718 17:44:16 -- nvmf/common.sh@542 -- # cat
00:33:37.718 17:44:16 -- target/dif.sh@72 -- # (( file = 1 ))
00:33:37.718 17:44:16 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:33:37.718 17:44:16 -- target/dif.sh@72 -- # (( file <= files ))
00:33:37.718 17:44:16 -- common/autotest_common.sh@1324 -- # grep libasan
00:33:37.718 17:44:16 -- target/dif.sh@73 -- # cat
00:33:37.718 17:44:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:33:37.718 17:44:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:33:37.718 17:44:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:33:37.718 {
00:33:37.718 "params": {
00:33:37.718 "name": "Nvme$subsystem",
00:33:37.718 "trtype": "$TEST_TRANSPORT",
00:33:37.718 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:37.718 "adrfam": "ipv4",
00:33:37.718 "trsvcid": "$NVMF_PORT",
00:33:37.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:37.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:37.718 "hdgst": ${hdgst:-false},
00:33:37.718 "ddgst": ${ddgst:-false}
00:33:37.718 },
00:33:37.718 "method": "bdev_nvme_attach_controller"
00:33:37.718 }
00:33:37.718 EOF
00:33:37.718 )")
00:33:37.718 17:44:16 -- target/dif.sh@72 -- # (( file++ ))
00:33:37.718 17:44:16 -- target/dif.sh@72 -- # (( file <= files ))
00:33:37.718 17:44:16 -- nvmf/common.sh@542 -- # cat
00:33:37.718 17:44:16 -- target/dif.sh@73 -- # cat
00:33:37.718 17:44:16 -- target/dif.sh@72 -- # (( file++ ))
00:33:37.718 17:44:16 -- target/dif.sh@72 -- # (( file <= files ))
00:33:37.718 17:44:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:33:37.718 17:44:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:33:37.718 {
00:33:37.718 "params": {
00:33:37.718 "name": "Nvme$subsystem",
00:33:37.718 "trtype": "$TEST_TRANSPORT",
00:33:37.718 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:37.718 "adrfam": "ipv4",
00:33:37.718 "trsvcid": "$NVMF_PORT",
00:33:37.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:37.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:37.718 "hdgst": ${hdgst:-false},
00:33:37.718 "ddgst": ${ddgst:-false}
00:33:37.718 },
00:33:37.718 "method": "bdev_nvme_attach_controller"
00:33:37.718 }
00:33:37.718 EOF
00:33:37.718 )")
00:33:37.718 17:44:16 -- nvmf/common.sh@542 -- # cat
00:33:37.718 17:44:16 -- nvmf/common.sh@544 -- # jq .
00:33:37.718 17:44:16 -- nvmf/common.sh@545 -- # IFS=,
00:33:37.718 17:44:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:33:37.718 "params": {
00:33:37.718 "name": "Nvme0",
00:33:37.718 "trtype": "tcp",
00:33:37.718 "traddr": "10.0.0.2",
00:33:37.718 "adrfam": "ipv4",
00:33:37.718 "trsvcid": "4420",
00:33:37.718 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:37.718 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:37.718 "hdgst": false,
00:33:37.718 "ddgst": false
00:33:37.718 },
00:33:37.718 "method": "bdev_nvme_attach_controller"
00:33:37.718 },{
00:33:37.718 "params": {
00:33:37.718 "name": "Nvme1",
00:33:37.718 "trtype": "tcp",
00:33:37.718 "traddr": "10.0.0.2",
00:33:37.718 "adrfam": "ipv4",
00:33:37.718 "trsvcid": "4420",
00:33:37.718 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:37.718 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:37.718 "hdgst": false,
00:33:37.718 "ddgst": false
00:33:37.718 },
00:33:37.718 "method": "bdev_nvme_attach_controller"
00:33:37.718 },{
00:33:37.718 "params": {
00:33:37.718 "name": "Nvme2",
00:33:37.718 "trtype": "tcp",
00:33:37.719 "traddr": "10.0.0.2",
00:33:37.719 "adrfam": "ipv4",
00:33:37.719 "trsvcid": "4420",
00:33:37.719 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:33:37.719 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:33:37.719 "hdgst": false,
00:33:37.719 "ddgst": false
00:33:37.719 },
00:33:37.719 "method": "bdev_nvme_attach_controller"
00:33:37.719 }'
00:33:37.719 17:44:16 -- common/autotest_common.sh@1324 -- # asan_lib=
00:33:37.719 17:44:16 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:33:37.719 17:44:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:33:37.719 17:44:16 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:33:37.719 17:44:16 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan
00:33:37.719 17:44:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:33:37.719 17:44:16 -- common/autotest_common.sh@1324 -- # asan_lib=
00:33:37.719 17:44:16 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:33:37.719 17:44:16 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:33:37.719 17:44:16 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:33:38.014 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:33:38.014 ...
00:33:38.014 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:33:38.014 ...
00:33:38.014 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:33:38.014 ...
00:33:38.014 fio-3.35
00:33:38.014 Starting 24 threads
00:33:38.014 EAL: No free 2048 kB hugepages reported on node 1
00:33:38.978 [2024-07-12 17:44:17.920461] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:33:38.979 [2024-07-12 17:44:17.920521] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:33:51.167
00:33:51.167 filename0: (groupid=0, jobs=1): err= 0: pid=146904: Fri Jul 12 17:44:28 2024
00:33:51.167 read: IOPS=445, BW=1781KiB/s (1824kB/s)(17.4MiB/10024msec)
00:33:51.167 slat (usec): min=10, max=138, avg=51.66, stdev=21.69
00:33:51.167 clat (usec): min=28535, max=47153, avg=35426.98, stdev=928.95
00:33:51.167 lat (usec): min=28552, max=47191, avg=35478.65, stdev=929.93
00:33:51.167 clat percentiles (usec):
00:33:51.167 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866],
00:33:51.167 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390],
00:33:51.167 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439],
00:33:51.167 | 99.00th=[36963], 99.50th=[36963], 99.90th=[46924], 99.95th=[46924],
00:33:51.167 | 99.99th=[46924]
00:33:51.167 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20
00:33:51.167 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20
00:33:51.167 lat (msec) : 50=100.00%
00:33:51.168 cpu : usr=98.70%, sys=0.89%, ctx=17, majf=0, minf=46
00:33:51.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:33:51.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:51.168 latency : target=0, window=0, percentile=100.00%, depth=16
00:33:51.168 filename0: (groupid=0, jobs=1): err= 0: pid=146905: Fri Jul 12 17:44:28 2024
00:33:51.168 read: IOPS=444, BW=1778KiB/s (1820kB/s)(17.4MiB/10008msec)
00:33:51.168 slat (usec): min=5, max=113, avg=49.73, stdev=22.07
00:33:51.168 clat (usec): min=28498, max=67249, avg=35495.49, stdev=2001.83
00:33:51.168 lat (usec): min=28516, max=67267, avg=35545.22, stdev=2001.03
00:33:51.168 clat percentiles (usec):
00:33:51.168 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866],
00:33:51.168 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390],
00:33:51.168 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439],
00:33:51.168 | 99.00th=[36963], 99.50th=[37487], 99.90th=[67634], 99.95th=[67634],
00:33:51.168 | 99.99th=[67634]
00:33:51.168 bw ( KiB/s): min= 1664, max= 1792, per=4.13%, avg=1771.79, stdev=47.95, samples=19
00:33:51.168 iops : min= 416, max= 448, avg=442.95, stdev=11.99, samples=19
00:33:51.168 lat (msec) : 50=99.64%, 100=0.36%
00:33:51.168 cpu : usr=98.97%, sys=0.64%, ctx=12, majf=0, minf=43
00:33:51.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:33:51.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:51.168 latency : target=0, window=0, percentile=100.00%, depth=16
00:33:51.168 filename0: (groupid=0, jobs=1): err= 0: pid=146906: Fri Jul 12 17:44:28 2024
00:33:51.168 read: IOPS=449, BW=1798KiB/s (1841kB/s)(17.6MiB/10003msec)
00:33:51.168 slat (nsec): min=9176, max=88328, avg=12500.45, stdev=5625.43
00:33:51.168 clat (usec): min=7851, max=42627, avg=35482.55, stdev=2855.30
00:33:51.168 lat (usec): min=7867, max=42637, avg=35495.05, stdev=2855.18
00:33:51.168 clat percentiles (usec):
00:33:51.168 | 1.00th=[11863], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390],
00:33:51.168 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914],
00:33:51.168 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439],
00:33:51.168 | 99.00th=[37487], 99.50th=[38011], 99.90th=[41681], 99.95th=[41681],
00:33:51.168 | 99.99th=[42730]
00:33:51.168 bw ( KiB/s): min= 1664, max= 2048, per=4.20%, avg=1798.74, stdev=67.11, samples=19
00:33:51.168 iops : min= 416, max= 512, avg=449.68, stdev=16.78, samples=19
00:33:51.168 lat (msec) : 10=0.71%, 20=0.36%, 50=98.93%
00:33:51.168 cpu : usr=99.00%, sys=0.61%, ctx=11, majf=0, minf=62
00:33:51.168 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:33:51.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:51.168 latency : target=0, window=0, percentile=100.00%, depth=16
00:33:51.168 filename0: (groupid=0, jobs=1): err= 0: pid=146907: Fri Jul 12 17:44:28 2024
00:33:51.168 read: IOPS=444, BW=1779KiB/s (1821kB/s)(17.4MiB/10003msec)
00:33:51.168 slat (usec): min=6, max=134, avg=39.64, stdev=16.83
00:33:51.168 clat (usec): min=18037, max=60660, avg=35644.12, stdev=3368.02
00:33:51.168 lat (usec): min=18053, max=60676, avg=35683.76, stdev=3368.62
00:33:51.168 clat percentiles (usec):
00:33:51.168 | 1.00th=[19006], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390],
00:33:51.168 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390],
00:33:51.168 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439],
00:33:51.168 | 99.00th=[52691], 99.50th=[53216], 99.90th=[60556], 99.95th=[60556],
00:33:51.168 | 99.99th=[60556]
00:33:51.168 bw ( KiB/s): min= 1648, max= 1808, per=4.13%, avg=1771.79, stdev=48.83, samples=19
00:33:51.168 iops : min= 412, max= 452, avg=442.95, stdev=12.21, samples=19
00:33:51.168 lat (msec) : 20=1.24%, 50=96.96%, 100=1.80%
00:33:51.168 cpu : usr=98.80%, sys=0.80%, ctx=11, majf=0, minf=46
00:33:51.168 IO depths : 1=4.7%, 2=10.7%, 4=24.3%, 8=52.5%, 16=7.8%, 32=0.0%, >=64=0.0%
00:33:51.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:51.168 latency : target=0, window=0, percentile=100.00%, depth=16
00:33:51.168 filename0: (groupid=0, jobs=1): err= 0: pid=146908: Fri Jul 12 17:44:28 2024
00:33:51.168 read: IOPS=445, BW=1781KiB/s (1824kB/s)(17.4MiB/10025msec)
00:33:51.168 slat (usec): min=9, max=109, avg=36.27, stdev=18.65
00:33:51.168 clat (usec): min=29011, max=47371, avg=35636.83, stdev=894.20
00:33:51.168 lat (usec): min=29044, max=47387, avg=35673.09, stdev=891.61
00:33:51.168 clat percentiles (usec):
00:33:51.168 | 1.00th=[34341], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390],
00:33:51.168 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914],
00:33:51.168 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439],
00:33:51.168 | 99.00th=[36963], 99.50th=[37487], 99.90th=[47449], 99.95th=[47449],
00:33:51.168 | 99.99th=[47449]
00:33:51.168 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20
00:33:51.168 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20
00:33:51.168 lat (msec) : 50=100.00%
00:33:51.168 cpu : usr=98.95%, sys=0.64%, ctx=16, majf=0, minf=50
00:33:51.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:33:51.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:51.168 latency : target=0, window=0, percentile=100.00%, depth=16
00:33:51.168 filename0: (groupid=0, jobs=1): err= 0: pid=146909: Fri Jul 12 17:44:28 2024
00:33:51.168 read: IOPS=445, BW=1782KiB/s (1824kB/s)(17.4MiB/10023msec)
00:33:51.168 slat (usec): min=9, max=153, avg=52.15, stdev=21.49
00:33:51.168 clat (usec): min=28469, max=47295, avg=35425.37, stdev=937.40
00:33:51.168 lat (usec): min=28502, max=47317, avg=35477.53, stdev=938.04
00:33:51.168 clat percentiles (usec):
00:33:51.168 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866],
00:33:51.168 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390],
00:33:51.168 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439],
00:33:51.168 | 99.00th=[36963], 99.50th=[37487], 99.90th=[47449], 99.95th=[47449],
00:33:51.168 | 99.99th=[47449]
00:33:51.168 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20
00:33:51.168 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20
00:33:51.168 lat (msec) : 50=100.00%
00:33:51.168 cpu : usr=99.10%, sys=0.50%, ctx=21, majf=0, minf=37
00:33:51.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:33:51.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:51.168 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:51.168 latency : target=0, window=0, percentile=100.00%, depth=16
00:33:51.168 filename0: (groupid=0, jobs=1): err= 0: pid=146910: Fri Jul 12 17:44:28 2024
00:33:51.168 read: IOPS=446, BW=1784KiB/s (1827kB/s)(17.4MiB/10008msec)
00:33:51.168 slat (nsec): min=5053, max=99860, avg=35779.41, stdev=14960.55
00:33:51.168 clat (usec): min=8915, max=62157, avg=35541.26, stdev=3671.34
00:33:51.168 lat (usec): min=8932, max=62187, avg=35577.04, stdev=3671.78
00:33:51.168 clat percentiles (usec):
00:33:51.168 | 1.00th=[10552], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390],
00:33:51.168 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390],
00:33:51.168 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439],
00:33:51.168 | 99.00th=[57934], 99.50th=[61080], 99.90th=[61604], 99.95th=[61604],
00:33:51.168 | 99.99th=[62129]
00:33:51.169 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20
00:33:51.169 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20
00:33:51.169 lat (msec) : 10=0.60%, 20=0.40%, 50=97.89%, 100=1.10%
cpu : usr=99.03%, sys=0.57%, ctx=12, majf=0, minf=48 00:33:51.169 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:51.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.169 filename0: (groupid=0, jobs=1): err= 0: pid=146911: Fri Jul 12 17:44:28 2024 00:33:51.169 read: IOPS=448, BW=1796KiB/s (1839kB/s)(17.6MiB/10016msec) 00:33:51.169 slat (usec): min=9, max=170, avg=50.79, stdev=22.54 00:33:51.169 clat (usec): min=7788, max=43241, avg=35193.35, stdev=2851.43 00:33:51.169 lat (usec): min=7798, max=43321, avg=35244.14, stdev=2854.97 00:33:51.169 clat percentiles (usec): 00:33:51.169 | 1.00th=[ 9634], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:33:51.169 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.169 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.169 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42730], 99.95th=[43254], 00:33:51.169 | 99.99th=[43254] 00:33:51.169 bw ( KiB/s): min= 1664, max= 2048, per=4.18%, avg=1792.00, stdev=71.93, samples=20 00:33:51.169 iops : min= 416, max= 512, avg=448.00, stdev=17.98, samples=20 00:33:51.169 lat (msec) : 10=1.07%, 50=98.93% 00:33:51.169 cpu : usr=98.88%, sys=0.73%, ctx=11, majf=0, minf=37 00:33:51.169 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:51.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.169 filename1: (groupid=0, jobs=1): err= 0: pid=146912: Fri Jul 12 
17:44:28 2024 00:33:51.169 read: IOPS=448, BW=1795KiB/s (1838kB/s)(17.6MiB/10019msec) 00:33:51.169 slat (usec): min=9, max=110, avg=45.15, stdev=24.46 00:33:51.169 clat (usec): min=7890, max=43121, avg=35282.89, stdev=2780.92 00:33:51.169 lat (usec): min=7905, max=43186, avg=35328.04, stdev=2782.99 00:33:51.169 clat percentiles (usec): 00:33:51.169 | 1.00th=[10814], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:33:51.169 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:33:51.169 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.169 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42730], 99.95th=[42730], 00:33:51.169 | 99.99th=[43254] 00:33:51.169 bw ( KiB/s): min= 1664, max= 1923, per=4.18%, avg=1792.15, stdev=42.02, samples=20 00:33:51.169 iops : min= 416, max= 480, avg=448.00, stdev=10.38, samples=20 00:33:51.169 lat (msec) : 10=0.71%, 20=0.36%, 50=98.93% 00:33:51.169 cpu : usr=98.65%, sys=0.94%, ctx=16, majf=0, minf=56 00:33:51.169 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:51.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.169 filename1: (groupid=0, jobs=1): err= 0: pid=146913: Fri Jul 12 17:44:28 2024 00:33:51.169 read: IOPS=445, BW=1782KiB/s (1824kB/s)(17.4MiB/10023msec) 00:33:51.169 slat (usec): min=9, max=114, avg=50.19, stdev=22.96 00:33:51.169 clat (usec): min=28508, max=47130, avg=35499.64, stdev=964.13 00:33:51.169 lat (usec): min=28554, max=47161, avg=35549.83, stdev=962.30 00:33:51.169 clat percentiles (usec): 00:33:51.169 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:33:51.169 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.169 | 
70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.169 | 99.00th=[37487], 99.50th=[39060], 99.90th=[46924], 99.95th=[46924], 00:33:51.169 | 99.99th=[46924] 00:33:51.169 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20 00:33:51.169 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20 00:33:51.169 lat (msec) : 50=100.00% 00:33:51.169 cpu : usr=98.93%, sys=0.67%, ctx=12, majf=0, minf=47 00:33:51.169 IO depths : 1=5.5%, 2=11.7%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:51.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.169 filename1: (groupid=0, jobs=1): err= 0: pid=146914: Fri Jul 12 17:44:28 2024 00:33:51.169 read: IOPS=445, BW=1782KiB/s (1824kB/s)(17.4MiB/10023msec) 00:33:51.169 slat (usec): min=10, max=110, avg=51.04, stdev=21.71 00:33:51.169 clat (usec): min=28471, max=47256, avg=35421.92, stdev=939.16 00:33:51.169 lat (usec): min=28488, max=47280, avg=35472.97, stdev=940.52 00:33:51.169 clat percentiles (usec): 00:33:51.169 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:33:51.169 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.169 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.169 | 99.00th=[36963], 99.50th=[37487], 99.90th=[46924], 99.95th=[47449], 00:33:51.169 | 99.99th=[47449] 00:33:51.169 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20 00:33:51.169 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20 00:33:51.169 lat (msec) : 50=100.00% 00:33:51.169 cpu : usr=98.84%, sys=0.77%, ctx=12, majf=0, minf=37 00:33:51.169 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:33:51.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.169 filename1: (groupid=0, jobs=1): err= 0: pid=146915: Fri Jul 12 17:44:28 2024 00:33:51.169 read: IOPS=444, BW=1780KiB/s (1823kB/s)(17.4MiB/10010msec) 00:33:51.169 slat (nsec): min=6023, max=94210, avg=34645.97, stdev=15305.14 00:33:51.169 clat (usec): min=18519, max=68095, avg=35673.77, stdev=2854.79 00:33:51.169 lat (usec): min=18533, max=68111, avg=35708.42, stdev=2854.11 00:33:51.169 clat percentiles (usec): 00:33:51.169 | 1.00th=[28967], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.169 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:33:51.169 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:33:51.169 | 99.00th=[38536], 99.50th=[59507], 99.90th=[67634], 99.95th=[67634], 00:33:51.169 | 99.99th=[67634] 00:33:51.169 bw ( KiB/s): min= 1664, max= 1840, per=4.14%, avg=1774.32, stdev=50.28, samples=19 00:33:51.169 iops : min= 416, max= 460, avg=443.58, stdev=12.57, samples=19 00:33:51.169 lat (msec) : 20=0.70%, 50=98.47%, 100=0.83% 00:33:51.169 cpu : usr=98.82%, sys=0.71%, ctx=25, majf=0, minf=50 00:33:51.169 IO depths : 1=5.9%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:51.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 issued rwts: total=4454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.169 filename1: (groupid=0, jobs=1): err= 0: pid=146916: Fri Jul 12 17:44:28 2024 00:33:51.169 read: IOPS=445, BW=1784KiB/s (1827kB/s)(17.4MiB/10009msec) 00:33:51.169 slat (usec): min=5, 
max=100, avg=35.42, stdev=14.81 00:33:51.169 clat (usec): min=9717, max=57999, avg=35533.49, stdev=2109.37 00:33:51.169 lat (usec): min=9732, max=58012, avg=35568.91, stdev=2109.32 00:33:51.169 clat percentiles (usec): 00:33:51.169 | 1.00th=[34341], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:33:51.169 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.169 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.169 | 99.00th=[36963], 99.50th=[37487], 99.90th=[57934], 99.95th=[57934], 00:33:51.169 | 99.99th=[57934] 00:33:51.169 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20 00:33:51.169 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20 00:33:51.169 lat (msec) : 10=0.27%, 20=0.09%, 50=99.28%, 100=0.36% 00:33:51.169 cpu : usr=98.69%, sys=0.92%, ctx=10, majf=0, minf=55 00:33:51.169 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.169 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.170 filename1: (groupid=0, jobs=1): err= 0: pid=146917: Fri Jul 12 17:44:28 2024 00:33:51.170 read: IOPS=445, BW=1782KiB/s (1824kB/s)(17.4MiB/10023msec) 00:33:51.170 slat (usec): min=9, max=129, avg=50.95, stdev=21.57 00:33:51.170 clat (usec): min=28500, max=47213, avg=35423.49, stdev=909.22 00:33:51.170 lat (usec): min=28531, max=47234, avg=35474.44, stdev=910.46 00:33:51.170 clat percentiles (usec): 00:33:51.170 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:33:51.170 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.170 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.170 | 99.00th=[36963], 99.50th=[36963], 
99.90th=[46924], 99.95th=[46924], 00:33:51.170 | 99.99th=[47449] 00:33:51.170 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20 00:33:51.170 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20 00:33:51.170 lat (msec) : 50=100.00% 00:33:51.170 cpu : usr=98.97%, sys=0.63%, ctx=9, majf=0, minf=32 00:33:51.170 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.170 filename1: (groupid=0, jobs=1): err= 0: pid=146918: Fri Jul 12 17:44:28 2024 00:33:51.170 read: IOPS=444, BW=1779KiB/s (1822kB/s)(17.4MiB/10002msec) 00:33:51.170 slat (usec): min=5, max=111, avg=40.58, stdev=16.05 00:33:51.170 clat (usec): min=29155, max=60201, avg=35602.70, stdev=1575.08 00:33:51.170 lat (usec): min=29180, max=60217, avg=35643.28, stdev=1573.73 00:33:51.170 clat percentiles (usec): 00:33:51.170 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.170 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.170 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.170 | 99.00th=[36963], 99.50th=[37487], 99.90th=[60031], 99.95th=[60031], 00:33:51.170 | 99.99th=[60031] 00:33:51.170 bw ( KiB/s): min= 1664, max= 1792, per=4.13%, avg=1771.79, stdev=47.95, samples=19 00:33:51.170 iops : min= 416, max= 448, avg=442.95, stdev=11.99, samples=19 00:33:51.170 lat (msec) : 50=99.64%, 100=0.36% 00:33:51.170 cpu : usr=98.93%, sys=0.66%, ctx=11, majf=0, minf=42 00:33:51.170 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 complete 
: 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.170 filename1: (groupid=0, jobs=1): err= 0: pid=146919: Fri Jul 12 17:44:28 2024 00:33:51.170 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.0MiB/10008msec) 00:33:51.170 slat (usec): min=5, max=112, avg=18.97, stdev=16.14 00:33:51.170 clat (usec): min=9522, max=66506, avg=34608.19, stdev=6342.67 00:33:51.170 lat (usec): min=9532, max=66529, avg=34627.16, stdev=6343.15 00:33:51.170 clat percentiles (usec): 00:33:51.170 | 1.00th=[19006], 5.00th=[21103], 10.00th=[26870], 20.00th=[29230], 00:33:51.170 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:33:51.170 | 70.00th=[35914], 80.00th=[35914], 90.00th=[43254], 95.00th=[44303], 00:33:51.170 | 99.00th=[57410], 99.50th=[59507], 99.90th=[66323], 99.95th=[66323], 00:33:51.170 | 99.99th=[66323] 00:33:51.170 bw ( KiB/s): min= 1563, max= 1984, per=4.29%, avg=1840.15, stdev=88.85, samples=20 00:33:51.170 iops : min= 390, max= 496, avg=460.00, stdev=22.34, samples=20 00:33:51.170 lat (msec) : 10=0.22%, 20=3.38%, 50=94.82%, 100=1.58% 00:33:51.170 cpu : usr=98.96%, sys=0.64%, ctx=13, majf=0, minf=57 00:33:51.170 IO depths : 1=0.1%, 2=0.5%, 4=3.7%, 8=79.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:33:51.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 complete : 0=0.0%, 4=89.3%, 8=8.6%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 issued rwts: total=4616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.170 filename2: (groupid=0, jobs=1): err= 0: pid=146920: Fri Jul 12 17:44:28 2024 00:33:51.170 read: IOPS=445, BW=1782KiB/s (1824kB/s)(17.4MiB/10023msec) 00:33:51.170 slat (usec): min=12, max=114, avg=37.17, stdev=13.88 00:33:51.170 clat (usec): min=28712, max=47221, avg=35597.81, stdev=912.12 
00:33:51.170 lat (usec): min=28739, max=47253, avg=35634.97, stdev=911.22 00:33:51.170 clat percentiles (usec): 00:33:51.170 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.170 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.170 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:33:51.170 | 99.00th=[36963], 99.50th=[37487], 99.90th=[46924], 99.95th=[46924], 00:33:51.170 | 99.99th=[47449] 00:33:51.170 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20 00:33:51.170 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20 00:33:51.170 lat (msec) : 50=100.00% 00:33:51.170 cpu : usr=97.99%, sys=1.21%, ctx=86, majf=0, minf=45 00:33:51.170 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.170 filename2: (groupid=0, jobs=1): err= 0: pid=146921: Fri Jul 12 17:44:28 2024 00:33:51.170 read: IOPS=444, BW=1779KiB/s (1822kB/s)(17.4MiB/10002msec) 00:33:51.170 slat (usec): min=4, max=105, avg=40.83, stdev=16.52 00:33:51.170 clat (usec): min=19299, max=59367, avg=35605.36, stdev=1609.55 00:33:51.170 lat (usec): min=19310, max=59380, avg=35646.19, stdev=1608.05 00:33:51.170 clat percentiles (usec): 00:33:51.170 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.170 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.170 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.170 | 99.00th=[36963], 99.50th=[37487], 99.90th=[59507], 99.95th=[59507], 00:33:51.170 | 99.99th=[59507] 00:33:51.170 bw ( KiB/s): min= 1664, max= 1792, per=4.13%, 
avg=1771.95, stdev=47.58, samples=19 00:33:51.170 iops : min= 416, max= 448, avg=442.95, stdev=11.99, samples=19 00:33:51.170 lat (msec) : 20=0.04%, 50=99.55%, 100=0.40% 00:33:51.170 cpu : usr=98.90%, sys=0.69%, ctx=11, majf=0, minf=54 00:33:51.170 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.170 filename2: (groupid=0, jobs=1): err= 0: pid=146922: Fri Jul 12 17:44:28 2024 00:33:51.170 read: IOPS=445, BW=1781KiB/s (1824kB/s)(17.4MiB/10025msec) 00:33:51.170 slat (nsec): min=5097, max=97383, avg=20045.13, stdev=17078.97 00:33:51.170 clat (usec): min=29570, max=47427, avg=35756.99, stdev=875.61 00:33:51.170 lat (usec): min=29581, max=47439, avg=35777.04, stdev=873.44 00:33:51.170 clat percentiles (usec): 00:33:51.170 | 1.00th=[34341], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:33:51.170 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:33:51.170 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:33:51.170 | 99.00th=[36963], 99.50th=[38011], 99.90th=[47449], 99.95th=[47449], 00:33:51.170 | 99.99th=[47449] 00:33:51.170 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1779.20, stdev=39.40, samples=20 00:33:51.170 iops : min= 416, max= 448, avg=444.80, stdev= 9.85, samples=20 00:33:51.170 lat (msec) : 50=100.00% 00:33:51.170 cpu : usr=98.83%, sys=0.77%, ctx=9, majf=0, minf=37 00:33:51.170 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.170 issued rwts: 
total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.170 filename2: (groupid=0, jobs=1): err= 0: pid=146923: Fri Jul 12 17:44:28 2024 00:33:51.170 read: IOPS=444, BW=1778KiB/s (1820kB/s)(17.4MiB/10009msec) 00:33:51.170 slat (usec): min=6, max=138, avg=38.57, stdev=16.43 00:33:51.170 clat (usec): min=17914, max=68778, avg=35673.50, stdev=2328.55 00:33:51.170 lat (usec): min=17927, max=68794, avg=35712.08, stdev=2327.25 00:33:51.170 clat percentiles (usec): 00:33:51.170 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.170 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:33:51.171 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.171 | 99.00th=[36963], 99.50th=[52691], 99.90th=[67634], 99.95th=[67634], 00:33:51.171 | 99.99th=[68682] 00:33:51.171 bw ( KiB/s): min= 1664, max= 1792, per=4.13%, avg=1771.79, stdev=47.95, samples=19 00:33:51.171 iops : min= 416, max= 448, avg=442.95, stdev=11.99, samples=19 00:33:51.171 lat (msec) : 20=0.22%, 50=99.19%, 100=0.58% 00:33:51.171 cpu : usr=98.81%, sys=0.77%, ctx=13, majf=0, minf=51 00:33:51.171 IO depths : 1=5.6%, 2=11.6%, 4=23.8%, 8=52.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:51.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.171 filename2: (groupid=0, jobs=1): err= 0: pid=146924: Fri Jul 12 17:44:28 2024 00:33:51.171 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.6MiB/10023msec) 00:33:51.171 slat (usec): min=3, max=107, avg=34.63, stdev=22.99 00:33:51.171 clat (usec): min=8227, max=43236, avg=35408.79, stdev=2629.31 00:33:51.171 lat (usec): min=8248, max=43298, avg=35443.42, stdev=2630.45 00:33:51.171 clat 
percentiles (usec): 00:33:51.171 | 1.00th=[15139], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.171 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:33:51.171 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:33:51.171 | 99.00th=[36963], 99.50th=[37487], 99.90th=[42730], 99.95th=[43254], 00:33:51.171 | 99.99th=[43254] 00:33:51.171 bw ( KiB/s): min= 1664, max= 1920, per=4.18%, avg=1792.00, stdev=41.53, samples=20 00:33:51.171 iops : min= 416, max= 480, avg=448.00, stdev=10.38, samples=20 00:33:51.171 lat (msec) : 10=0.71%, 20=0.36%, 50=98.93% 00:33:51.171 cpu : usr=98.72%, sys=0.92%, ctx=17, majf=0, minf=58 00:33:51.171 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.171 filename2: (groupid=0, jobs=1): err= 0: pid=146925: Fri Jul 12 17:44:28 2024 00:33:51.171 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.6MiB/10025msec) 00:33:51.171 slat (usec): min=4, max=116, avg=29.68, stdev=22.54 00:33:51.171 clat (usec): min=7890, max=43165, avg=35455.59, stdev=2590.26 00:33:51.171 lat (usec): min=7921, max=43216, avg=35485.27, stdev=2591.19 00:33:51.171 clat percentiles (usec): 00:33:51.171 | 1.00th=[16712], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.171 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:33:51.171 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:33:51.171 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42730], 99.95th=[43254], 00:33:51.171 | 99.99th=[43254] 00:33:51.171 bw ( KiB/s): min= 1664, max= 1920, per=4.18%, avg=1792.00, stdev=41.53, samples=20 00:33:51.171 iops : min= 416, max= 480, 
avg=448.00, stdev=10.38, samples=20 00:33:51.171 lat (msec) : 10=0.40%, 20=0.67%, 50=98.93% 00:33:51.171 cpu : usr=98.95%, sys=0.64%, ctx=17, majf=0, minf=49 00:33:51.171 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:51.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.171 filename2: (groupid=0, jobs=1): err= 0: pid=146926: Fri Jul 12 17:44:28 2024 00:33:51.171 read: IOPS=449, BW=1796KiB/s (1839kB/s)(17.6MiB/10008msec) 00:33:51.171 slat (usec): min=9, max=163, avg=39.67, stdev=24.27 00:33:51.171 clat (usec): min=9306, max=67636, avg=35340.26, stdev=5647.83 00:33:51.171 lat (usec): min=9329, max=67657, avg=35379.93, stdev=5646.92 00:33:51.171 clat percentiles (usec): 00:33:51.171 | 1.00th=[19268], 5.00th=[26870], 10.00th=[28181], 20.00th=[34866], 00:33:51.171 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.171 | 70.00th=[35914], 80.00th=[35914], 90.00th=[37487], 95.00th=[44303], 00:33:51.171 | 99.00th=[53740], 99.50th=[58983], 99.90th=[66847], 99.95th=[66847], 00:33:51.171 | 99.99th=[67634] 00:33:51.171 bw ( KiB/s): min= 1664, max= 1952, per=4.18%, avg=1791.35, stdev=79.41, samples=20 00:33:51.171 iops : min= 416, max= 488, avg=447.80, stdev=19.91, samples=20 00:33:51.171 lat (msec) : 10=0.29%, 20=1.27%, 50=95.08%, 100=3.36% 00:33:51.171 cpu : usr=98.49%, sys=0.99%, ctx=12, majf=0, minf=48 00:33:51.171 IO depths : 1=2.6%, 2=5.4%, 4=12.7%, 8=67.4%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:51.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 complete : 0=0.0%, 4=91.1%, 8=5.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 issued rwts: total=4494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.171 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:33:51.171 filename2: (groupid=0, jobs=1): err= 0: pid=146927: Fri Jul 12 17:44:28 2024 00:33:51.171 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10009msec) 00:33:51.171 slat (usec): min=4, max=105, avg=35.34, stdev=15.06 00:33:51.171 clat (usec): min=8278, max=66756, avg=35535.94, stdev=3320.42 00:33:51.171 lat (usec): min=8291, max=66774, avg=35571.28, stdev=3320.72 00:33:51.171 clat percentiles (usec): 00:33:51.171 | 1.00th=[21627], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:33:51.171 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:33:51.171 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:33:51.171 | 99.00th=[41681], 99.50th=[58983], 99.90th=[66847], 99.95th=[66847], 00:33:51.171 | 99.99th=[66847] 00:33:51.171 bw ( KiB/s): min= 1648, max= 1840, per=4.15%, avg=1780.00, stdev=41.81, samples=20 00:33:51.171 iops : min= 412, max= 460, avg=445.00, stdev=10.45, samples=20 00:33:51.171 lat (msec) : 10=0.45%, 20=0.27%, 50=98.43%, 100=0.85% 00:33:51.171 cpu : usr=99.02%, sys=0.58%, ctx=15, majf=0, minf=72 00:33:51.171 IO depths : 1=5.3%, 2=11.3%, 4=24.3%, 8=51.8%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:51.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 issued rwts: total=4466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.171 00:33:51.171 Run status group 0 (all jobs): 00:33:51.171 READ: bw=41.8MiB/s (43.9MB/s), 1778KiB/s-1845KiB/s (1820kB/s-1889kB/s), io=419MiB (440MB), run=10002-10025msec 00:33:51.171 17:44:28 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:51.171 17:44:28 -- target/dif.sh@43 -- # local sub 00:33:51.171 17:44:28 -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.171 17:44:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:51.171 17:44:28 -- 
target/dif.sh@36 -- # local sub_id=0 00:33:51.171 17:44:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:51.171 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.171 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.171 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.171 17:44:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:51.171 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.171 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.171 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.171 17:44:28 -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.171 17:44:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:51.171 17:44:28 -- target/dif.sh@36 -- # local sub_id=1 00:33:51.172 17:44:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.172 17:44:28 -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:51.172 17:44:28 -- target/dif.sh@36 -- # local sub_id=2 00:33:51.172 17:44:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@115 -- # NULL_DIF=1 00:33:51.172 17:44:28 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:51.172 17:44:28 -- target/dif.sh@115 -- # numjobs=2 00:33:51.172 17:44:28 -- target/dif.sh@115 -- # iodepth=8 00:33:51.172 17:44:28 -- target/dif.sh@115 -- # runtime=5 00:33:51.172 17:44:28 -- target/dif.sh@115 -- # files=1 00:33:51.172 17:44:28 -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:51.172 17:44:28 -- target/dif.sh@28 -- # local sub 00:33:51.172 17:44:28 -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.172 17:44:28 -- target/dif.sh@31 -- # create_subsystem 0 00:33:51.172 17:44:28 -- target/dif.sh@18 -- # local sub_id=0 00:33:51.172 17:44:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 bdev_null0 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 
17:44:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 [2024-07-12 17:44:28.395553] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.172 17:44:28 -- target/dif.sh@31 -- # create_subsystem 1 00:33:51.172 17:44:28 -- target/dif.sh@18 -- # local sub_id=1 00:33:51.172 17:44:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 bdev_null1 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.172 17:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.172 17:44:28 -- common/autotest_common.sh@10 -- # set +x 
00:33:51.172 17:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.172 17:44:28 -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:51.172 17:44:28 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:51.172 17:44:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:51.172 17:44:28 -- nvmf/common.sh@520 -- # config=() 00:33:51.172 17:44:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.172 17:44:28 -- nvmf/common.sh@520 -- # local subsystem config 00:33:51.172 17:44:28 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.172 17:44:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:51.172 17:44:28 -- target/dif.sh@82 -- # gen_fio_conf 00:33:51.172 17:44:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:51.172 { 00:33:51.172 "params": { 00:33:51.172 "name": "Nvme$subsystem", 00:33:51.172 "trtype": "$TEST_TRANSPORT", 00:33:51.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.172 "adrfam": "ipv4", 00:33:51.172 "trsvcid": "$NVMF_PORT", 00:33:51.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.172 "hdgst": ${hdgst:-false}, 00:33:51.172 "ddgst": ${ddgst:-false} 00:33:51.172 }, 00:33:51.172 "method": "bdev_nvme_attach_controller" 00:33:51.172 } 00:33:51.172 EOF 00:33:51.172 )") 00:33:51.172 17:44:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:51.172 17:44:28 -- target/dif.sh@54 -- # local file 00:33:51.172 17:44:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.172 17:44:28 -- target/dif.sh@56 -- # cat 00:33:51.172 17:44:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:51.172 17:44:28 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.172 17:44:28 -- common/autotest_common.sh@1320 -- # shift 00:33:51.172 17:44:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:51.172 17:44:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.172 17:44:28 -- nvmf/common.sh@542 -- # cat 00:33:51.172 17:44:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:51.172 17:44:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.172 17:44:28 -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.172 17:44:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:51.172 17:44:28 -- target/dif.sh@73 -- # cat 00:33:51.172 17:44:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:51.172 17:44:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:51.172 17:44:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:51.172 { 00:33:51.172 "params": { 00:33:51.172 "name": "Nvme$subsystem", 00:33:51.172 "trtype": "$TEST_TRANSPORT", 00:33:51.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.172 "adrfam": "ipv4", 00:33:51.172 "trsvcid": "$NVMF_PORT", 00:33:51.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.172 "hdgst": ${hdgst:-false}, 00:33:51.172 "ddgst": ${ddgst:-false} 00:33:51.172 }, 00:33:51.172 "method": "bdev_nvme_attach_controller" 00:33:51.172 } 00:33:51.172 EOF 00:33:51.172 )") 00:33:51.173 17:44:28 -- target/dif.sh@72 -- # (( file++ )) 00:33:51.173 17:44:28 -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.173 17:44:28 -- nvmf/common.sh@542 -- # cat 00:33:51.173 17:44:28 -- nvmf/common.sh@544 -- # jq . 
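The heredoc above is expanded once per subsystem by gen_nvmf_target_json and the fragments are then merged with `jq`. For readability, here is the same per-controller shape rendered in Python — a sketch, not part of the harness; field values mirror the trace (hdgst/ddgst fall back to false via `${hdgst:-false}` when digests are not enabled):

```python
import json

def nvme_controller(subsystem: int, traddr: str = "10.0.0.2",
                    trsvcid: str = "4420") -> dict:
    """One bdev_nvme_attach_controller fragment, mirroring the heredoc above."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,   # no header digest in this run
            "ddgst": False,   # no data digest in this run
        },
        "method": "bdev_nvme_attach_controller",
    }

# Two subsystems (0 and 1), as in this test run.
config = [nvme_controller(0), nvme_controller(1)]
assert config[0]["params"]["subnqn"] == "nqn.2016-06.io.spdk:cnode0"
assert config[1]["params"]["hostnqn"] == "nqn.2016-06.io.spdk:host1"
print(json.dumps(config, indent=2))
```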
00:33:51.173 17:44:28 -- nvmf/common.sh@545 -- # IFS=, 00:33:51.173 17:44:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:51.173 "params": { 00:33:51.173 "name": "Nvme0", 00:33:51.173 "trtype": "tcp", 00:33:51.173 "traddr": "10.0.0.2", 00:33:51.173 "adrfam": "ipv4", 00:33:51.173 "trsvcid": "4420", 00:33:51.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.173 "hdgst": false, 00:33:51.173 "ddgst": false 00:33:51.173 }, 00:33:51.173 "method": "bdev_nvme_attach_controller" 00:33:51.173 },{ 00:33:51.173 "params": { 00:33:51.173 "name": "Nvme1", 00:33:51.173 "trtype": "tcp", 00:33:51.173 "traddr": "10.0.0.2", 00:33:51.173 "adrfam": "ipv4", 00:33:51.173 "trsvcid": "4420", 00:33:51.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.173 "hdgst": false, 00:33:51.173 "ddgst": false 00:33:51.173 }, 00:33:51.173 "method": "bdev_nvme_attach_controller" 00:33:51.173 }' 00:33:51.173 17:44:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:51.173 17:44:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:51.173 17:44:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.173 17:44:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.173 17:44:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:51.173 17:44:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:51.173 17:44:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:51.173 17:44:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:51.173 17:44:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:51.173 17:44:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.173 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:51.173 ... 00:33:51.173 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:51.173 ... 00:33:51.173 fio-3.35 00:33:51.173 Starting 4 threads 00:33:51.173 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.173 [2024-07-12 17:44:29.561700] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:33:51.173 [2024-07-12 17:44:29.561750] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:56.437 00:33:56.437 filename0: (groupid=0, jobs=1): err= 0: pid=148980: Fri Jul 12 17:44:34 2024 00:33:56.437 read: IOPS=1950, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5003msec) 00:33:56.437 slat (nsec): min=9193, max=42361, avg=13109.26, stdev=4223.23 00:33:56.437 clat (usec): min=1176, max=9381, avg=4059.88, stdev=744.62 00:33:56.437 lat (usec): min=1190, max=9390, avg=4072.99, stdev=744.76 00:33:56.437 clat percentiles (usec): 00:33:56.437 | 1.00th=[ 2409], 5.00th=[ 2966], 10.00th=[ 3228], 20.00th=[ 3556], 00:33:56.437 | 30.00th=[ 3720], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4228], 00:33:56.437 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5538], 00:33:56.437 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7832], 00:33:56.437 | 99.99th=[ 9372] 00:33:56.437 bw ( KiB/s): min=14528, max=16880, per=26.50%, avg=15608.00, stdev=727.85, samples=10 00:33:56.437 iops : min= 1816, max= 2110, avg=1951.00, stdev=90.98, samples=10 00:33:56.437 lat (msec) : 2=0.51%, 4=46.11%, 10=53.38% 00:33:56.437 cpu : usr=96.78%, sys=2.82%, ctx=10, majf=0, minf=0 00:33:56.437 IO depths : 1=0.1%, 2=8.7%, 4=63.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.437 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:56.437 issued rwts: total=9760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.437 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.438 filename0: (groupid=0, jobs=1): err= 0: pid=148981: Fri Jul 12 17:44:34 2024 00:33:56.438 read: IOPS=1819, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5002msec) 00:33:56.438 slat (nsec): min=7483, max=43179, avg=13534.87, stdev=4398.15 00:33:56.438 clat (usec): min=829, max=7876, avg=4354.81, stdev=725.94 00:33:56.438 lat (usec): min=845, max=7890, avg=4368.34, stdev=725.71 00:33:56.438 clat percentiles (usec): 00:33:56.438 | 1.00th=[ 2900], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3916], 00:33:56.438 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:33:56.438 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5276], 95.00th=[ 5932], 00:33:56.438 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[ 7767], 00:33:56.438 | 99.99th=[ 7898] 00:33:56.438 bw ( KiB/s): min=13984, max=15280, per=24.71%, avg=14554.80, stdev=352.14, samples=10 00:33:56.438 iops : min= 1748, max= 1910, avg=1819.30, stdev=44.07, samples=10 00:33:56.438 lat (usec) : 1000=0.04% 00:33:56.438 lat (msec) : 2=0.19%, 4=26.13%, 10=73.64% 00:33:56.438 cpu : usr=96.48%, sys=3.10%, ctx=9, majf=0, minf=9 00:33:56.438 IO depths : 1=0.3%, 2=6.2%, 4=65.4%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.438 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.438 issued rwts: total=9103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.438 filename1: (groupid=0, jobs=1): err= 0: pid=148982: Fri Jul 12 17:44:34 2024 00:33:56.438 read: IOPS=1771, BW=13.8MiB/s (14.5MB/s)(69.2MiB/5001msec) 00:33:56.438 slat (usec): min=7, max=128, avg=13.43, stdev= 4.69 00:33:56.438 clat (usec): min=704, max=8032, avg=4476.36, stdev=814.02 00:33:56.438 lat (usec): min=715, max=8043, 
avg=4489.79, stdev=813.38 00:33:56.438 clat percentiles (usec): 00:33:56.438 | 1.00th=[ 3032], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 4015], 00:33:56.438 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:33:56.438 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5538], 95.00th=[ 6521], 00:33:56.438 | 99.00th=[ 7242], 99.50th=[ 7635], 99.90th=[ 7963], 99.95th=[ 8029], 00:33:56.438 | 99.99th=[ 8029] 00:33:56.438 bw ( KiB/s): min=12848, max=14832, per=23.88%, avg=14065.78, stdev=672.59, samples=9 00:33:56.438 iops : min= 1606, max= 1854, avg=1758.22, stdev=84.07, samples=9 00:33:56.438 lat (usec) : 750=0.01%, 1000=0.08% 00:33:56.438 lat (msec) : 2=0.10%, 4=19.49%, 10=80.31% 00:33:56.438 cpu : usr=96.64%, sys=2.94%, ctx=7, majf=0, minf=9 00:33:56.438 IO depths : 1=0.1%, 2=4.9%, 4=66.4%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.438 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.438 issued rwts: total=8859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.438 filename1: (groupid=0, jobs=1): err= 0: pid=148984: Fri Jul 12 17:44:34 2024 00:33:56.438 read: IOPS=1821, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5003msec) 00:33:56.438 slat (nsec): min=6361, max=67406, avg=13351.36, stdev=4457.86 00:33:56.438 clat (usec): min=1052, max=8342, avg=4351.74, stdev=777.53 00:33:56.438 lat (usec): min=1073, max=8356, avg=4365.09, stdev=777.35 00:33:56.438 clat percentiles (usec): 00:33:56.438 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3589], 20.00th=[ 3884], 00:33:56.438 | 30.00th=[ 4015], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:33:56.438 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5276], 95.00th=[ 6128], 00:33:56.438 | 99.00th=[ 7046], 99.50th=[ 7242], 99.90th=[ 7570], 99.95th=[ 7701], 00:33:56.438 | 99.99th=[ 8356] 00:33:56.438 bw ( KiB/s): min=14096, max=15248, 
per=24.74%, avg=14574.40, stdev=367.07, samples=10 00:33:56.438 iops : min= 1762, max= 1906, avg=1821.80, stdev=45.88, samples=10 00:33:56.438 lat (msec) : 2=0.15%, 4=28.96%, 10=70.89% 00:33:56.438 cpu : usr=96.30%, sys=3.30%, ctx=8, majf=0, minf=9 00:33:56.438 IO depths : 1=0.1%, 2=4.7%, 4=66.9%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.438 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.438 issued rwts: total=9114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.438 00:33:56.438 Run status group 0 (all jobs): 00:33:56.438 READ: bw=57.5MiB/s (60.3MB/s), 13.8MiB/s-15.2MiB/s (14.5MB/s-16.0MB/s), io=288MiB (302MB), run=5001-5003msec 00:33:56.438 17:44:34 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:56.438 17:44:34 -- target/dif.sh@43 -- # local sub 00:33:56.438 17:44:34 -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.438 17:44:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:56.438 17:44:34 -- target/dif.sh@36 -- # local sub_id=0 00:33:56.438 17:44:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x 00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.438 17:44:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x 00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.438 17:44:34 -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.438 17:44:34 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:56.438 17:44:34 -- target/dif.sh@36 -- # local sub_id=1 00:33:56.438 17:44:34 -- target/dif.sh@38 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x
00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:56.438 17:44:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x
00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:56.438
00:33:56.438 real 0m24.837s
00:33:56.438 user 5m8.379s
00:33:56.438 sys 0m4.238s
00:33:56.438 17:44:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x
00:33:56.438 ************************************
00:33:56.438 END TEST fio_dif_rand_params
00:33:56.438 ************************************
00:33:56.438 17:44:34 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:33:56.438 17:44:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:33:56.438 17:44:34 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x
00:33:56.438 ************************************
00:33:56.438 START TEST fio_dif_digest
00:33:56.438 ************************************
00:33:56.438 17:44:34 -- common/autotest_common.sh@1104 -- # fio_dif_digest
00:33:56.438 17:44:34 -- target/dif.sh@123 -- # local NULL_DIF
00:33:56.438 17:44:34 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:33:56.438 17:44:34 -- target/dif.sh@125 -- # local hdgst ddgst
00:33:56.438 17:44:34 -- target/dif.sh@127 -- # NULL_DIF=3
00:33:56.438 17:44:34 -- target/dif.sh@127 -- # bs=128k,128k,128k
00:33:56.438 17:44:34 -- target/dif.sh@127 -- # numjobs=3
00:33:56.438 17:44:34 -- target/dif.sh@127 -- # iodepth=3
00:33:56.438 17:44:34 -- target/dif.sh@127 -- # runtime=10
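The fio_dif_rand_params figures above are internally consistent: the four per-job READ bandwidths (15.2, 14.2, 13.8 and 14.2 MiB/s) sum to the group's reported 57.5 MiB/s within rounding, and for the first job 15.2 MiB/s of 8 KiB random reads works out to about 1946 IOPS against the reported 1950. A quick check, with the values transcribed from the log:

```python
# Per-job READ bandwidths from the fio_dif_rand_params summary above, MiB/s.
job_bw_mib = [15.2, 14.2, 13.8, 14.2]
group_bw_mib = 57.5  # aggregate reported by fio

# Per-job figures are rounded to 0.1 MiB/s, so allow a small tolerance.
assert abs(sum(job_bw_mib) - group_bw_mib) < 0.2

# First job does 8 KiB random reads: bandwidth and IOPS should agree.
iops = job_bw_mib[0] * 1024 * 1024 / 8192
assert abs(iops - 1950) < 10  # fio reported IOPS=1950
```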
00:33:56.438 17:44:34 -- target/dif.sh@128 -- # hdgst=true 00:33:56.438 17:44:34 -- target/dif.sh@128 -- # ddgst=true 00:33:56.438 17:44:34 -- target/dif.sh@130 -- # create_subsystems 0 00:33:56.438 17:44:34 -- target/dif.sh@28 -- # local sub 00:33:56.438 17:44:34 -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.438 17:44:34 -- target/dif.sh@31 -- # create_subsystem 0 00:33:56.438 17:44:34 -- target/dif.sh@18 -- # local sub_id=0 00:33:56.438 17:44:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x 00:33:56.438 bdev_null0 00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.438 17:44:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x 00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.438 17:44:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x 00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.438 17:44:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:56.438 17:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.438 17:44:34 -- common/autotest_common.sh@10 -- # set +x 00:33:56.438 [2024-07-12 17:44:34.972477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.438 17:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.438 17:44:34 -- 
target/dif.sh@131 -- # fio /dev/fd/62 00:33:56.438 17:44:34 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:56.438 17:44:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:56.438 17:44:34 -- nvmf/common.sh@520 -- # config=() 00:33:56.438 17:44:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.438 17:44:34 -- nvmf/common.sh@520 -- # local subsystem config 00:33:56.438 17:44:34 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.438 17:44:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:56.438 17:44:34 -- target/dif.sh@82 -- # gen_fio_conf 00:33:56.438 17:44:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:56.438 { 00:33:56.438 "params": { 00:33:56.438 "name": "Nvme$subsystem", 00:33:56.438 "trtype": "$TEST_TRANSPORT", 00:33:56.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.438 "adrfam": "ipv4", 00:33:56.438 "trsvcid": "$NVMF_PORT", 00:33:56.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.438 "hdgst": ${hdgst:-false}, 00:33:56.438 "ddgst": ${ddgst:-false} 00:33:56.438 }, 00:33:56.438 "method": "bdev_nvme_attach_controller" 00:33:56.438 } 00:33:56.438 EOF 00:33:56.438 )") 00:33:56.438 17:44:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:56.438 17:44:34 -- target/dif.sh@54 -- # local file 00:33:56.438 17:44:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.438 17:44:34 -- target/dif.sh@56 -- # cat 00:33:56.438 17:44:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:56.438 17:44:34 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.438 17:44:34 -- common/autotest_common.sh@1320 -- # shift 00:33:56.439 
17:44:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:56.439 17:44:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.439 17:44:34 -- nvmf/common.sh@542 -- # cat 00:33:56.439 17:44:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:56.439 17:44:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.439 17:44:34 -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.439 17:44:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:56.439 17:44:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:56.439 17:44:34 -- nvmf/common.sh@544 -- # jq . 00:33:56.439 17:44:34 -- nvmf/common.sh@545 -- # IFS=, 00:33:56.439 17:44:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:56.439 "params": { 00:33:56.439 "name": "Nvme0", 00:33:56.439 "trtype": "tcp", 00:33:56.439 "traddr": "10.0.0.2", 00:33:56.439 "adrfam": "ipv4", 00:33:56.439 "trsvcid": "4420", 00:33:56.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.439 "hdgst": true, 00:33:56.439 "ddgst": true 00:33:56.439 }, 00:33:56.439 "method": "bdev_nvme_attach_controller" 00:33:56.439 }' 00:33:56.439 17:44:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:56.439 17:44:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:56.439 17:44:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.439 17:44:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.439 17:44:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:56.439 17:44:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:56.439 17:44:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:56.439 17:44:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:56.439 17:44:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:56.439 17:44:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.439 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:56.439 ... 00:33:56.439 fio-3.35 00:33:56.439 Starting 3 threads 00:33:56.696 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.953 [2024-07-12 17:44:35.824933] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:33:56.953 [2024-07-12 17:44:35.824987] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:09.143 00:34:09.143 filename0: (groupid=0, jobs=1): err= 0: pid=150263: Fri Jul 12 17:44:45 2024 00:34:09.143 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(251MiB/10049msec) 00:34:09.143 slat (nsec): min=4604, max=56066, avg=18879.49, stdev=5017.06 00:34:09.143 clat (usec): min=9428, max=53332, avg=14983.48, stdev=1714.91 00:34:09.143 lat (usec): min=9444, max=53358, avg=15002.36, stdev=1714.85 00:34:09.143 clat percentiles (usec): 00:34:09.143 | 1.00th=[10552], 5.00th=[12911], 10.00th=[13566], 20.00th=[14091], 00:34:09.143 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:34:09.143 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:34:09.144 | 99.00th=[17957], 99.50th=[18220], 99.90th=[22414], 99.95th=[49546], 00:34:09.144 | 99.99th=[53216] 00:34:09.144 bw ( KiB/s): min=24576, max=26880, per=35.10%, avg=25651.20, stdev=596.63, samples=20 00:34:09.144 iops : min= 192, max= 210, avg=200.40, stdev= 4.66, samples=20 00:34:09.144 lat (msec) : 10=0.45%, 20=99.30%, 50=0.20%, 100=0.05% 00:34:09.144 cpu : usr=95.92%, sys=3.71%, ctx=43, majf=0, minf=157 00:34:09.144 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.144 issued rwts: total=2006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.144 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:09.144 filename0: (groupid=0, jobs=1): err= 0: pid=150264: Fri Jul 12 17:44:45 2024 00:34:09.144 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(228MiB/10047msec) 00:34:09.144 slat (nsec): min=9534, max=54940, avg=30249.09, stdev=5989.15 00:34:09.144 clat (usec): min=9652, max=58794, avg=16447.50, stdev=3307.87 00:34:09.144 lat (usec): min=9680, max=58823, avg=16477.75, stdev=3307.83 00:34:09.144 clat percentiles (usec): 00:34:09.144 | 1.00th=[13435], 5.00th=[14615], 10.00th=[14877], 20.00th=[15401], 00:34:09.144 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:34:09.144 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:34:09.144 | 99.00th=[19792], 99.50th=[54264], 99.90th=[58459], 99.95th=[58983], 00:34:09.144 | 99.99th=[58983] 00:34:09.144 bw ( KiB/s): min=20992, max=24576, per=31.95%, avg=23349.45, stdev=983.55, samples=20 00:34:09.144 iops : min= 164, max= 192, avg=182.40, stdev= 7.69, samples=20 00:34:09.144 lat (msec) : 10=0.11%, 20=99.07%, 50=0.27%, 100=0.55% 00:34:09.144 cpu : usr=94.86%, sys=4.55%, ctx=16, majf=0, minf=74 00:34:09.144 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.144 issued rwts: total=1826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.144 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:09.144 filename0: (groupid=0, jobs=1): err= 0: pid=150265: Fri Jul 12 17:44:45 2024 00:34:09.144 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(238MiB/10047msec) 00:34:09.144 slat (nsec): min=9595, max=46929, avg=18828.40, stdev=5992.30 
00:34:09.144 clat (usec): min=9410, max=56931, avg=15771.18, stdev=2293.44 00:34:09.144 lat (usec): min=9424, max=56947, avg=15790.01, stdev=2293.55 00:34:09.144 clat percentiles (usec): 00:34:09.144 | 1.00th=[10683], 5.00th=[13960], 10.00th=[14353], 20.00th=[14877], 00:34:09.144 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15664], 60.00th=[15926], 00:34:09.144 | 70.00th=[16188], 80.00th=[16450], 90.00th=[17171], 95.00th=[17433], 00:34:09.144 | 99.00th=[18744], 99.50th=[19268], 99.90th=[56361], 99.95th=[56886], 00:34:09.144 | 99.99th=[56886] 00:34:09.144 bw ( KiB/s): min=22528, max=25600, per=33.33%, avg=24358.40, stdev=696.27, samples=20 00:34:09.144 iops : min= 176, max= 200, avg=190.30, stdev= 5.44, samples=20 00:34:09.144 lat (msec) : 10=0.16%, 20=99.37%, 50=0.26%, 100=0.21% 00:34:09.144 cpu : usr=95.89%, sys=3.74%, ctx=22, majf=0, minf=119 00:34:09.144 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.144 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.144 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:09.144 00:34:09.144 Run status group 0 (all jobs): 00:34:09.144 READ: bw=71.4MiB/s (74.8MB/s), 22.7MiB/s-25.0MiB/s (23.8MB/s-26.2MB/s), io=717MiB (752MB), run=10047-10049msec 00:34:09.144 17:44:46 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:09.144 17:44:46 -- target/dif.sh@43 -- # local sub 00:34:09.144 17:44:46 -- target/dif.sh@45 -- # for sub in "$@" 00:34:09.144 17:44:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:09.144 17:44:46 -- target/dif.sh@36 -- # local sub_id=0 00:34:09.144 17:44:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:09.144 17:44:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.144 17:44:46 -- common/autotest_common.sh@10 -- # set 
+x
00:34:09.144 17:44:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:09.144 17:44:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:34:09.144 17:44:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:09.144 17:44:46 -- common/autotest_common.sh@10 -- # set +x
00:34:09.144 17:44:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:09.144
00:34:09.144 real 0m11.238s
00:34:09.144 user 0m40.126s
00:34:09.144 sys 0m1.517s
00:34:09.144 17:44:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:09.144 17:44:46 -- common/autotest_common.sh@10 -- # set +x
00:34:09.144 ************************************
00:34:09.144 END TEST fio_dif_digest
00:34:09.144 ************************************
00:34:09.144 17:44:46 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:34:09.144 17:44:46 -- target/dif.sh@147 -- # nvmftestfini
00:34:09.144 17:44:46 -- nvmf/common.sh@476 -- # nvmfcleanup
00:34:09.144 17:44:46 -- nvmf/common.sh@116 -- # sync
00:34:09.144 17:44:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:34:09.144 17:44:46 -- nvmf/common.sh@119 -- # set +e
00:34:09.144 17:44:46 -- nvmf/common.sh@120 -- # for i in {1..20}
00:34:09.144 17:44:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:34:09.144 rmmod nvme_tcp
00:34:09.144 rmmod nvme_fabrics
00:34:09.144 rmmod nvme_keyring
00:34:09.144 17:44:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:34:09.144 17:44:46 -- nvmf/common.sh@123 -- # set -e
00:34:09.144 17:44:46 -- nvmf/common.sh@124 -- # return 0
00:34:09.144 17:44:46 -- nvmf/common.sh@477 -- # '[' -n 140719 ']'
00:34:09.144 17:44:46 -- nvmf/common.sh@478 -- # killprocess 140719
00:34:09.144 17:44:46 -- common/autotest_common.sh@926 -- # '[' -z 140719 ']'
00:34:09.144 17:44:46 -- common/autotest_common.sh@930 -- # kill -0 140719
00:34:09.144 17:44:46 -- common/autotest_common.sh@931 -- # uname
00:34:09.144 17:44:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:34:09.144
17:44:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140719 00:34:09.144 17:44:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:09.144 17:44:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:09.144 17:44:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140719' 00:34:09.144 killing process with pid 140719 00:34:09.144 17:44:46 -- common/autotest_common.sh@945 -- # kill 140719 00:34:09.144 17:44:46 -- common/autotest_common.sh@950 -- # wait 140719 00:34:09.144 17:44:46 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:09.144 17:44:46 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:10.082 Waiting for block devices as requested 00:34:10.082 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:34:10.341 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:10.341 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:10.341 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:10.341 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:10.600 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:10.600 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:10.600 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:10.860 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:10.860 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:10.860 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:10.860 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.118 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:11.118 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:11.118 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:11.376 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:11.376 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:11.376 17:44:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:11.376 17:44:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:11.376 17:44:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:34:11.376 17:44:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:11.376 17:44:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.376 17:44:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:11.376 17:44:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.913 17:44:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:13.913 00:34:13.913 real 1m13.842s 00:34:13.913 user 7m41.194s 00:34:13.913 sys 0m17.636s 00:34:13.913 17:44:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.913 17:44:52 -- common/autotest_common.sh@10 -- # set +x 00:34:13.913 ************************************ 00:34:13.913 END TEST nvmf_dif 00:34:13.913 ************************************ 00:34:13.913 17:44:52 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:13.913 17:44:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:13.913 17:44:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:13.913 17:44:52 -- common/autotest_common.sh@10 -- # set +x 00:34:13.913 ************************************ 00:34:13.913 START TEST nvmf_abort_qd_sizes 00:34:13.913 ************************************ 00:34:13.913 17:44:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:13.913 * Looking for test storage... 
00:34:13.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.913 17:44:52 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.913 17:44:52 -- nvmf/common.sh@7 -- # uname -s 00:34:13.913 17:44:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.913 17:44:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.913 17:44:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.913 17:44:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.913 17:44:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.913 17:44:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.913 17:44:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.913 17:44:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.913 17:44:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.913 17:44:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.913 17:44:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:13.913 17:44:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:13.913 17:44:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.913 17:44:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.913 17:44:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.913 17:44:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.913 17:44:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.913 17:44:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.913 17:44:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.913 17:44:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.914 17:44:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.914 17:44:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.914 17:44:52 -- paths/export.sh@5 -- # export PATH 00:34:13.914 17:44:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.914 17:44:52 -- nvmf/common.sh@46 -- # : 0 00:34:13.914 17:44:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:13.914 17:44:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:13.914 
17:44:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:13.914 17:44:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.914 17:44:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.914 17:44:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:13.914 17:44:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:13.914 17:44:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:13.914 17:44:52 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:34:13.914 17:44:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:13.914 17:44:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.914 17:44:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:13.914 17:44:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:13.914 17:44:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:13.914 17:44:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.914 17:44:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.914 17:44:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.914 17:44:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:13.914 17:44:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:13.914 17:44:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:13.914 17:44:52 -- common/autotest_common.sh@10 -- # set +x 00:34:19.181 17:44:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:19.181 17:44:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:19.181 17:44:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:19.182 17:44:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:19.182 17:44:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:19.182 17:44:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:19.182 17:44:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:19.182 17:44:57 -- nvmf/common.sh@294 -- # net_devs=() 00:34:19.182 17:44:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:19.182 
17:44:57 -- nvmf/common.sh@295 -- # e810=() 00:34:19.182 17:44:57 -- nvmf/common.sh@295 -- # local -ga e810 00:34:19.182 17:44:57 -- nvmf/common.sh@296 -- # x722=() 00:34:19.182 17:44:57 -- nvmf/common.sh@296 -- # local -ga x722 00:34:19.182 17:44:57 -- nvmf/common.sh@297 -- # mlx=() 00:34:19.182 17:44:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:19.182 17:44:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.182 17:44:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:19.182 17:44:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:19.182 17:44:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:19.182 17:44:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:19.182 17:44:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:19.182 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:34:19.182 17:44:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:19.182 17:44:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:19.182 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:19.182 17:44:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:19.182 17:44:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:19.182 17:44:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.182 17:44:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:19.182 17:44:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.182 17:44:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:19.182 Found net devices under 0000:af:00.0: cvl_0_0 00:34:19.182 17:44:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.182 17:44:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:19.182 17:44:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.182 17:44:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:19.182 17:44:57 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.182 17:44:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:19.182 Found net devices under 0000:af:00.1: cvl_0_1 00:34:19.182 17:44:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.182 17:44:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:19.182 17:44:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:19.182 17:44:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:19.182 17:44:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:19.182 17:44:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.182 17:44:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.182 17:44:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.182 17:44:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:19.182 17:44:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.182 17:44:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.182 17:44:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:19.182 17:44:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.182 17:44:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.182 17:44:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:19.182 17:44:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:19.182 17:44:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.182 17:44:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.182 17:44:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.182 17:44:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:19.182 17:44:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:19.182 17:44:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:34:19.182 17:44:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.182 17:44:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.182 17:44:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:19.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:34:19.182 00:34:19.182 --- 10.0.0.2 ping statistics --- 00:34:19.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.182 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:34:19.182 17:44:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:34:19.182 00:34:19.182 --- 10.0.0.1 ping statistics --- 00:34:19.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.182 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:34:19.182 17:44:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.182 17:44:57 -- nvmf/common.sh@410 -- # return 0 00:34:19.182 17:44:57 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:19.182 17:44:57 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:21.713 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 
0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:21.713 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:22.649 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:34:22.649 17:45:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.649 17:45:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:22.649 17:45:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:22.649 17:45:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.649 17:45:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:22.649 17:45:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:22.649 17:45:01 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:34:22.649 17:45:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:22.649 17:45:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:22.649 17:45:01 -- common/autotest_common.sh@10 -- # set +x 00:34:22.649 17:45:01 -- nvmf/common.sh@469 -- # nvmfpid=158582 00:34:22.649 17:45:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:22.649 17:45:01 -- nvmf/common.sh@470 -- # waitforlisten 158582 00:34:22.649 17:45:01 -- common/autotest_common.sh@819 -- # '[' -z 158582 ']' 00:34:22.649 17:45:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.649 17:45:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:22.649 17:45:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:22.649 17:45:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:22.649 17:45:01 -- common/autotest_common.sh@10 -- # set +x 00:34:22.649 [2024-07-12 17:45:01.534339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:34:22.649 [2024-07-12 17:45:01.534390] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.649 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.908 [2024-07-12 17:45:01.622431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.908 [2024-07-12 17:45:01.666285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:22.908 [2024-07-12 17:45:01.666436] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.908 [2024-07-12 17:45:01.666448] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.908 [2024-07-12 17:45:01.666457] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:22.908 [2024-07-12 17:45:01.666509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.908 [2024-07-12 17:45:01.666607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.908 [2024-07-12 17:45:01.666699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:22.908 [2024-07-12 17:45:01.666701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.842 17:45:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:23.842 17:45:02 -- common/autotest_common.sh@852 -- # return 0 00:34:23.842 17:45:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:23.842 17:45:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:23.842 17:45:02 -- common/autotest_common.sh@10 -- # set +x 00:34:23.842 17:45:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:34:23.842 17:45:02 -- scripts/common.sh@311 -- # local bdf bdfs 00:34:23.842 17:45:02 -- scripts/common.sh@312 -- # local nvmes 00:34:23.842 17:45:02 -- scripts/common.sh@314 -- # [[ -n 0000:86:00.0 ]] 00:34:23.842 17:45:02 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:23.842 17:45:02 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:34:23.842 17:45:02 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:34:23.842 17:45:02 -- scripts/common.sh@322 -- # uname -s 00:34:23.842 17:45:02 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:34:23.842 17:45:02 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:34:23.842 17:45:02 -- scripts/common.sh@327 -- # (( 1 )) 00:34:23.842 17:45:02 -- 
scripts/common.sh@328 -- # printf '%s\n' 0000:86:00.0 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:86:00.0 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:34:23.842 17:45:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:23.842 17:45:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:23.842 17:45:02 -- common/autotest_common.sh@10 -- # set +x 00:34:23.842 ************************************ 00:34:23.842 START TEST spdk_target_abort 00:34:23.842 ************************************ 00:34:23.842 17:45:02 -- common/autotest_common.sh@1104 -- # spdk_target 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:23.842 17:45:02 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:34:23.842 17:45:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.842 17:45:02 -- common/autotest_common.sh@10 -- # set +x 00:34:27.119 spdk_targetn1 00:34:27.119 17:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:27.119 17:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:27.119 17:45:05 -- common/autotest_common.sh@10 -- # set +x 00:34:27.119 [2024-07-12 17:45:05.388075] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.119 17:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:34:27.119 17:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:27.119 17:45:05 -- common/autotest_common.sh@10 -- # 
set +x 00:34:27.119 17:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:34:27.119 17:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:27.119 17:45:05 -- common/autotest_common.sh@10 -- # set +x 00:34:27.119 17:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:34:27.119 17:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:27.119 17:45:05 -- common/autotest_common.sh@10 -- # set +x 00:34:27.119 [2024-07-12 17:45:05.424369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.119 17:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:27.119 17:45:05 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:27.119 17:45:05 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:27.119 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.398 Initializing NVMe Controllers 00:34:30.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:30.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:30.398 Initialization complete. Launching workers. 
00:34:30.398 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 14497, failed: 0 00:34:30.398 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1659, failed to submit 12838 00:34:30.398 success 732, unsuccess 927, failed 0 00:34:30.398 17:45:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:30.398 17:45:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:30.398 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.923 [2024-07-12 17:45:11.847294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 
00:34:32.923 [2024-07-12 17:45:11.847402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:32.923 [2024-07-12 17:45:11.847428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d450 is same with the state(5) to be set 00:34:33.194 Initializing NVMe Controllers 00:34:33.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:33.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:33.194 Initialization complete. Launching workers. 00:34:33.194 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8486, failed: 0 00:34:33.194 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1193, failed to submit 7293 00:34:33.194 success 355, unsuccess 838, failed 0 00:34:33.194 17:45:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.194 17:45:11 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:33.194 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.509 Initializing NVMe Controllers 00:34:36.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:36.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:36.509 Initialization complete. Launching workers. 
00:34:36.509 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 39193, failed: 0 00:34:36.509 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2640, failed to submit 36553 00:34:36.509 success 582, unsuccess 2058, failed 0 00:34:36.509 17:45:15 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:34:36.509 17:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.509 17:45:15 -- common/autotest_common.sh@10 -- # set +x 00:34:36.509 17:45:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.509 17:45:15 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:36.509 17:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.509 17:45:15 -- common/autotest_common.sh@10 -- # set +x 00:34:37.887 17:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.887 17:45:16 -- target/abort_qd_sizes.sh@62 -- # killprocess 158582 00:34:37.887 17:45:16 -- common/autotest_common.sh@926 -- # '[' -z 158582 ']' 00:34:37.887 17:45:16 -- common/autotest_common.sh@930 -- # kill -0 158582 00:34:37.887 17:45:16 -- common/autotest_common.sh@931 -- # uname 00:34:37.887 17:45:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:37.887 17:45:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 158582 00:34:37.887 17:45:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:37.887 17:45:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:37.887 17:45:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 158582' 00:34:37.887 killing process with pid 158582 00:34:37.887 17:45:16 -- common/autotest_common.sh@945 -- # kill 158582 00:34:37.887 17:45:16 -- common/autotest_common.sh@950 -- # wait 158582 00:34:37.887 00:34:37.887 real 0m14.197s 00:34:37.887 user 0m57.293s 00:34:37.887 sys 0m2.017s 00:34:37.887 17:45:16 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:34:37.887 17:45:16 -- common/autotest_common.sh@10 -- # set +x 00:34:37.887 ************************************ 00:34:37.887 END TEST spdk_target_abort 00:34:37.887 ************************************ 00:34:37.887 17:45:16 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:34:37.887 17:45:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:37.887 17:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:37.887 17:45:16 -- common/autotest_common.sh@10 -- # set +x 00:34:37.887 ************************************ 00:34:37.887 START TEST kernel_target_abort 00:34:37.887 ************************************ 00:34:37.887 17:45:16 -- common/autotest_common.sh@1104 -- # kernel_target 00:34:37.887 17:45:16 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:34:37.887 17:45:16 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:34:37.887 17:45:16 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:34:37.887 17:45:16 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:34:37.887 17:45:16 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:34:37.887 17:45:16 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:37.887 17:45:16 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:37.887 17:45:16 -- nvmf/common.sh@627 -- # local block nvme 00:34:37.887 17:45:16 -- nvmf/common.sh@629 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:37.887 17:45:16 -- nvmf/common.sh@630 -- # modprobe nvmet 00:34:37.887 17:45:16 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:37.887 17:45:16 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:41.172 Waiting for block devices as requested 00:34:41.172 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:34:41.172 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:41.172 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:41.172 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:41.172 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:41.172 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:41.172 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:41.172 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:41.431 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:41.431 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:41.431 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:41.431 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:41.720 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:41.720 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:41.720 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:41.979 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:41.979 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:41.979 17:45:20 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:34:41.979 17:45:20 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:41.979 17:45:20 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:34:41.979 17:45:20 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:34:41.979 17:45:20 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:41.979 No valid GPT data, bailing 00:34:41.979 17:45:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:41.979 17:45:20 -- scripts/common.sh@393 -- # pt= 00:34:41.979 17:45:20 -- 
scripts/common.sh@394 -- # return 1 00:34:41.979 17:45:20 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:34:41.979 17:45:20 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:34:41.979 17:45:20 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:41.979 17:45:20 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:41.979 17:45:20 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:41.979 17:45:20 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:34:41.979 17:45:20 -- nvmf/common.sh@654 -- # echo 1 00:34:41.979 17:45:20 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:34:41.979 17:45:20 -- nvmf/common.sh@656 -- # echo 1 00:34:41.979 17:45:20 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:34:41.979 17:45:20 -- nvmf/common.sh@663 -- # echo tcp 00:34:41.979 17:45:20 -- nvmf/common.sh@664 -- # echo 4420 00:34:42.238 17:45:20 -- nvmf/common.sh@665 -- # echo ipv4 00:34:42.238 17:45:20 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:42.238 17:45:20 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:42.238 00:34:42.238 Discovery Log Number of Records 2, Generation counter 2 00:34:42.238 =====Discovery Log Entry 0====== 00:34:42.238 trtype: tcp 00:34:42.238 adrfam: ipv4 00:34:42.238 subtype: current discovery subsystem 00:34:42.238 treq: not specified, sq flow control disable supported 00:34:42.238 portid: 1 00:34:42.238 trsvcid: 4420 00:34:42.238 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:42.238 traddr: 10.0.0.1 00:34:42.238 eflags: none 00:34:42.238 sectype: none 00:34:42.238 =====Discovery Log Entry 1====== 00:34:42.238 trtype: tcp 00:34:42.238 adrfam: ipv4 00:34:42.238 subtype: nvme subsystem 00:34:42.238 treq: not specified, sq 
flow control disable supported 00:34:42.238 portid: 1 00:34:42.238 trsvcid: 4420 00:34:42.238 subnqn: kernel_target 00:34:42.238 traddr: 10.0.0.1 00:34:42.238 eflags: none 00:34:42.238 sectype: none 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:42.238 17:45:21 -- 
target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.238 17:45:21 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:42.238 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.524 Initializing NVMe Controllers 00:34:45.524 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:45.524 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:45.524 Initialization complete. Launching workers. 00:34:45.524 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 55393, failed: 0 00:34:45.524 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 55393, failed to submit 0 00:34:45.524 success 0, unsuccess 55393, failed 0 00:34:45.524 17:45:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.524 17:45:24 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:45.524 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.813 Initializing NVMe Controllers 00:34:48.813 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:48.813 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:48.813 Initialization complete. Launching workers. 
00:34:48.813 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 89651, failed: 0 00:34:48.813 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22590, failed to submit 67061 00:34:48.813 success 0, unsuccess 22590, failed 0 00:34:48.813 17:45:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.813 17:45:27 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:48.813 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.104 Initializing NVMe Controllers 00:34:52.104 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:52.104 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:52.104 Initialization complete. Launching workers. 00:34:52.104 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 85995, failed: 0 00:34:52.104 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 21482, failed to submit 64513 00:34:52.104 success 0, unsuccess 21482, failed 0 00:34:52.104 17:45:30 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:34:52.104 17:45:30 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:34:52.104 17:45:30 -- nvmf/common.sh@677 -- # echo 0 00:34:52.104 17:45:30 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:34:52.104 17:45:30 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:52.104 17:45:30 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:52.104 17:45:30 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:52.104 17:45:30 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:34:52.104 17:45:30 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:34:52.104 
00:34:52.104 real 0m13.615s 00:34:52.104 user 0m7.647s 00:34:52.104 sys 0m3.033s 00:34:52.104 17:45:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:52.104 17:45:30 -- common/autotest_common.sh@10 -- # set +x 00:34:52.104 ************************************ 00:34:52.104 END TEST kernel_target_abort 00:34:52.104 ************************************ 00:34:52.104 17:45:30 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:34:52.104 17:45:30 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:34:52.104 17:45:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:52.104 17:45:30 -- nvmf/common.sh@116 -- # sync 00:34:52.104 17:45:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:52.104 17:45:30 -- nvmf/common.sh@119 -- # set +e 00:34:52.104 17:45:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:52.104 17:45:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:52.104 rmmod nvme_tcp 00:34:52.104 rmmod nvme_fabrics 00:34:52.104 rmmod nvme_keyring 00:34:52.104 17:45:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:52.104 17:45:30 -- nvmf/common.sh@123 -- # set -e 00:34:52.104 17:45:30 -- nvmf/common.sh@124 -- # return 0 00:34:52.104 17:45:30 -- nvmf/common.sh@477 -- # '[' -n 158582 ']' 00:34:52.104 17:45:30 -- nvmf/common.sh@478 -- # killprocess 158582 00:34:52.104 17:45:30 -- common/autotest_common.sh@926 -- # '[' -z 158582 ']' 00:34:52.104 17:45:30 -- common/autotest_common.sh@930 -- # kill -0 158582 00:34:52.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (158582) - No such process 00:34:52.104 17:45:30 -- common/autotest_common.sh@953 -- # echo 'Process with pid 158582 is not found' 00:34:52.104 Process with pid 158582 is not found 00:34:52.104 17:45:30 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:52.104 17:45:30 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:54.639 0000:86:00.0 (8086 0a54): Already 
using the nvme driver 00:34:54.639 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:34:54.639 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:34:54.639 17:45:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:54.639 17:45:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:54.639 17:45:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:54.639 17:45:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:54.639 17:45:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.639 17:45:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:54.639 17:45:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.172 17:45:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:57.172 00:34:57.172 real 0m43.164s 00:34:57.172 user 1m8.967s 00:34:57.172 sys 0m13.109s 00:34:57.172 17:45:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:57.172 17:45:35 -- 
common/autotest_common.sh@10 -- # set +x 00:34:57.172 ************************************ 00:34:57.172 END TEST nvmf_abort_qd_sizes 00:34:57.172 ************************************ 00:34:57.172 17:45:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:57.172 17:45:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:57.172 17:45:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:57.172 17:45:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:57.172 17:45:35 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:34:57.172 17:45:35 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:34:57.172 17:45:35 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:34:57.172 17:45:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:57.172 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:57.172 17:45:35 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:34:57.172 17:45:35 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:34:57.172 17:45:35 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:34:57.172 17:45:35 -- common/autotest_common.sh@10 -- # set +x 00:35:02.444 INFO: APP EXITING 00:35:02.444 INFO: killing all VMs 00:35:02.444 INFO: killing vhost app 00:35:02.444 WARN: no vhost pid file found 00:35:02.444 INFO: EXIT DONE 00:35:04.346 0000:86:00.0 (8086 0a54): Already using the 
nvme driver 00:35:04.346 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:04.346 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:04.604 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:04.604 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:04.604 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:04.604 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:04.604 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:07.892 Cleaning 00:35:07.892 Removing: /var/run/dpdk/spdk0/config 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:07.892 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:07.892 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:07.892 Removing: /var/run/dpdk/spdk1/config 00:35:07.892 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:07.892 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:07.892 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:07.892 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:07.892 Removing: /var/run/dpdk/spdk2/config 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:07.892 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:07.892 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:07.892 Removing: /var/run/dpdk/spdk3/config 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 
00:35:07.892 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:07.892 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:07.892 Removing: /var/run/dpdk/spdk4/config 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:07.892 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:07.892 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:07.892 Removing: /dev/shm/bdev_svc_trace.1 00:35:07.892 Removing: /dev/shm/nvmf_trace.0 00:35:07.892 Removing: /dev/shm/spdk_tgt_trace.pid3914010 00:35:07.892 Removing: /var/run/dpdk/spdk0 00:35:07.892 Removing: /var/run/dpdk/spdk1 00:35:07.892 Removing: /var/run/dpdk/spdk2 00:35:07.892 Removing: /var/run/dpdk/spdk3 00:35:07.892 Removing: /var/run/dpdk/spdk4 00:35:07.892 Removing: /var/run/dpdk/spdk_pid102142 00:35:07.892 Removing: /var/run/dpdk/spdk_pid108180 00:35:07.892 Removing: /var/run/dpdk/spdk_pid108845 00:35:07.892 Removing: /var/run/dpdk/spdk_pid109530 00:35:07.892 Removing: /var/run/dpdk/spdk_pid110183 00:35:07.892 Removing: /var/run/dpdk/spdk_pid111040 00:35:07.892 Removing: /var/run/dpdk/spdk_pid111878 00:35:07.892 Removing: /var/run/dpdk/spdk_pid112757 00:35:07.892 Removing: /var/run/dpdk/spdk_pid113322 00:35:07.892 Removing: /var/run/dpdk/spdk_pid118177 00:35:07.892 Removing: /var/run/dpdk/spdk_pid118460 00:35:07.892 Removing: /var/run/dpdk/spdk_pid124710 00:35:07.892 Removing: /var/run/dpdk/spdk_pid125020 00:35:07.892 Removing: /var/run/dpdk/spdk_pid127436 00:35:07.892 Removing: /var/run/dpdk/spdk_pid135639 00:35:07.892 Removing: 
/var/run/dpdk/spdk_pid135645 00:35:07.892 Removing: /var/run/dpdk/spdk_pid141026 00:35:07.892 Removing: /var/run/dpdk/spdk_pid143045 00:35:07.892 Removing: /var/run/dpdk/spdk_pid145309 00:35:07.892 Removing: /var/run/dpdk/spdk_pid146517 00:35:07.892 Removing: /var/run/dpdk/spdk_pid148806 00:35:07.892 Removing: /var/run/dpdk/spdk_pid150024 00:35:07.892 Removing: /var/run/dpdk/spdk_pid159431 00:35:07.892 Removing: /var/run/dpdk/spdk_pid160229 00:35:07.892 Removing: /var/run/dpdk/spdk_pid160885 00:35:07.892 Removing: /var/run/dpdk/spdk_pid163383 00:35:07.892 Removing: /var/run/dpdk/spdk_pid163919 00:35:07.892 Removing: /var/run/dpdk/spdk_pid164452 00:35:07.892 Removing: /var/run/dpdk/spdk_pid22696 00:35:07.892 Removing: /var/run/dpdk/spdk_pid27425 00:35:07.892 Removing: /var/run/dpdk/spdk_pid33820 00:35:07.892 Removing: /var/run/dpdk/spdk_pid35231 00:35:07.892 Removing: /var/run/dpdk/spdk_pid36789 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3911565 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3912803 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3914010 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3914748 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3916726 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3917966 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3918329 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3918811 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3919153 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3919474 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3919763 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3920045 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3920350 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3921194 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3924624 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3924919 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3925424 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3925486 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3926050 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3926315 00:35:07.892 Removing: 
/var/run/dpdk/spdk_pid3926739 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3926888 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3927190 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3927461 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3927751 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3927839 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3928387 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3928667 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3928991 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3929292 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3929321 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3929584 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3929781 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3930025 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3930208 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3930474 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3930746 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3931028 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3931292 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3931579 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3931843 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3932127 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3932390 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3932677 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3932941 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3933223 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3933424 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3933664 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3933863 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3934110 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3934352 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3934627 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3934900 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3935184 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3935448 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3935735 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3935999 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3936283 
00:35:07.892 Removing: /var/run/dpdk/spdk_pid3936553 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3936834 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3937099 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3937386 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3937595 00:35:07.892 Removing: /var/run/dpdk/spdk_pid3937833 00:35:08.151 Removing: /var/run/dpdk/spdk_pid3938038 00:35:08.151 Removing: /var/run/dpdk/spdk_pid3938281 00:35:08.151 Removing: /var/run/dpdk/spdk_pid3938526 00:35:08.151 Removing: /var/run/dpdk/spdk_pid3938802 00:35:08.151 Removing: /var/run/dpdk/spdk_pid3939068 00:35:08.151 Removing: /var/run/dpdk/spdk_pid3939350 00:35:08.151 Removing: /var/run/dpdk/spdk_pid3939622 00:35:08.152 Removing: /var/run/dpdk/spdk_pid3939906 00:35:08.152 Removing: /var/run/dpdk/spdk_pid3940069 00:35:08.152 Removing: /var/run/dpdk/spdk_pid3940304 00:35:08.152 Removing: /var/run/dpdk/spdk_pid3944166 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4032223 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4037236 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4048497 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4054197 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4058494 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4059282 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4065583 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4065591 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4066646 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4067453 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4068510 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4069044 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4069173 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4069514 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4069584 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4069586 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4070643 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4071581 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4072514 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4073181 00:35:08.152 Removing: 
/var/run/dpdk/spdk_pid4073310 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4073576 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4075000 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4076383 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4085807 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4086215 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4090806 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4097021 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4100232 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4111325 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4120601 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4122546 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4123507 00:35:08.152 Removing: /var/run/dpdk/spdk_pid41343 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4142025 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4146052 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4150930 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4152777 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4154649 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4154911 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4154927 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4155193 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4155775 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4157650 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4158782 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4159443 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4165336 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4171144 00:35:08.152 Removing: /var/run/dpdk/spdk_pid4177186 00:35:08.152 Removing: /var/run/dpdk/spdk_pid45657 00:35:08.152 Removing: /var/run/dpdk/spdk_pid53482 00:35:08.152 Removing: /var/run/dpdk/spdk_pid53491 00:35:08.152 Removing: /var/run/dpdk/spdk_pid58348 00:35:08.152 Removing: /var/run/dpdk/spdk_pid58608 00:35:08.152 Removing: /var/run/dpdk/spdk_pid58880 00:35:08.410 Removing: /var/run/dpdk/spdk_pid59336 00:35:08.410 Removing: /var/run/dpdk/spdk_pid59407 00:35:08.410 Removing: /var/run/dpdk/spdk_pid61021 00:35:08.410 Removing: 
/var/run/dpdk/spdk_pid62871
00:35:08.410 Removing: /var/run/dpdk/spdk_pid64687
00:35:08.410 Removing: /var/run/dpdk/spdk_pid66461
00:35:08.410 Removing: /var/run/dpdk/spdk_pid68312
00:35:08.410 Removing: /var/run/dpdk/spdk_pid70563
00:35:08.410 Removing: /var/run/dpdk/spdk_pid76994
00:35:08.410 Removing: /var/run/dpdk/spdk_pid77655
00:35:08.410 Removing: /var/run/dpdk/spdk_pid79670
00:35:08.410 Removing: /var/run/dpdk/spdk_pid80877
00:35:08.410 Removing: /var/run/dpdk/spdk_pid87160
00:35:08.410 Removing: /var/run/dpdk/spdk_pid90104
00:35:08.410 Removing: /var/run/dpdk/spdk_pid96021
00:35:08.410 Clean
00:35:08.410 killing process with pid 3861309
00:35:16.544 killing process with pid 3861306
00:35:16.544 killing process with pid 3861308
00:35:16.803 killing process with pid 3861307
00:35:16.803 17:45:55 -- common/autotest_common.sh@1436 -- # return 0
00:35:16.803 17:45:55 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:35:16.803 17:45:55 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:16.803 17:45:55 -- common/autotest_common.sh@10 -- # set +x
00:35:17.063 17:45:55 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:35:17.063 17:45:55 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:17.063 17:45:55 -- common/autotest_common.sh@10 -- # set +x
00:35:17.063 17:45:55 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:17.063 17:45:55 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:17.063 17:45:55 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:17.063 17:45:55 -- spdk/autotest.sh@394 -- # hash lcov
00:35:17.063 17:45:55 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:35:17.063 17:45:55 -- spdk/autotest.sh@396 -- # hostname
00:35:17.063 17:45:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:17.063 geninfo: WARNING: invalid characters removed from testname!
00:35:49.149 17:46:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:53.420 17:46:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:59.993 17:46:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:05.270 17:46:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:11.840 17:46:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:17.114 17:46:55 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:23.687 17:47:01 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:23.687 17:47:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:23.687 17:47:01 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:36:23.687 17:47:01 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:23.687 17:47:01 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:23.687 17:47:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:23.687 17:47:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:23.687 17:47:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:23.687 17:47:01 -- paths/export.sh@5 -- $ export PATH
00:36:23.687 17:47:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:23.687 17:47:01 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:36:23.687 17:47:01 -- common/autobuild_common.sh@435 -- $ date +%s
00:36:23.687 17:47:01 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720799221.XXXXXX
00:36:23.687 17:47:01 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720799221.RYhLad
00:36:23.687 17:47:01 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:36:23.687 17:47:01 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
00:36:23.687 17:47:01 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:36:23.687 17:47:01 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:36:23.687 17:47:01 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:23.687 17:47:01 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:23.687 17:47:01 -- common/autobuild_common.sh@451 -- $ get_config_params
00:36:23.687 17:47:01 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:36:23.687 17:47:01 -- common/autotest_common.sh@10 -- $ set +x
00:36:23.687 17:47:01 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:36:23.687 17:47:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:36:23.687 17:47:01 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:23.687 17:47:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:23.687 17:47:01 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:36:23.687 17:47:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:23.687 17:47:01 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:23.687 17:47:01 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:23.687 17:47:01 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:23.687 17:47:01 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:23.687 17:47:02 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:23.687 + [[ -n 3806370 ]]
00:36:23.687 + sudo kill 3806370
00:36:23.697 [Pipeline] }
00:36:23.718 [Pipeline] // stage
00:36:23.724 [Pipeline] }
00:36:23.744 [Pipeline] // timeout
00:36:23.750 [Pipeline] }
00:36:23.768 [Pipeline] // catchError
00:36:23.774 [Pipeline] }
00:36:23.790 [Pipeline] // wrap
00:36:23.796 [Pipeline] }
00:36:23.812 [Pipeline] // catchError
00:36:23.822 [Pipeline] stage
00:36:23.824 [Pipeline] { (Epilogue)
00:36:23.842 [Pipeline] catchError
00:36:23.844 [Pipeline] {
00:36:23.861 [Pipeline] echo
00:36:23.863 Cleanup processes
00:36:23.871 [Pipeline] sh
00:36:24.154 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:24.154 178557 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:24.167 [Pipeline] sh
00:36:24.448 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:24.448 ++ grep -v 'sudo pgrep'
00:36:24.448 ++ awk '{print $1}'
00:36:24.448 + sudo kill -9
00:36:24.448 + true
00:36:24.461 [Pipeline] sh
00:36:24.746 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:42.863 [Pipeline] sh
00:36:43.146 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:43.146 Artifacts sizes are good
00:36:43.162 [Pipeline] archiveArtifacts
00:36:43.170 Archiving artifacts
00:36:43.434 [Pipeline] sh
00:36:43.769 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:43.784 [Pipeline] cleanWs
00:36:43.794 [WS-CLEANUP] Deleting project workspace...
00:36:43.794 [WS-CLEANUP] Deferred wipeout is used...
00:36:43.801 [WS-CLEANUP] done
00:36:43.803 [Pipeline] }
00:36:43.825 [Pipeline] // catchError
00:36:43.839 [Pipeline] sh
00:36:44.134 + logger -p user.info -t JENKINS-CI
00:36:44.145 [Pipeline] }
00:36:44.154 [Pipeline] // stage
00:36:44.158 [Pipeline] }
00:36:44.167 [Pipeline] // node
00:36:44.170 [Pipeline] End of Pipeline
00:36:44.191 Finished: SUCCESS